Science.gov

Sample records for adequate sample size

  1. Quantifying variability within water samples: the need for adequate subsampling.

    PubMed

    Donohue, Ian; Irvine, Kenneth

    2008-01-01

    Accurate and precise determination of the concentration of nutrients and other substances in waterbodies is an essential requirement for supporting effective management and legislation. Owing primarily to logistic and financial constraints, however, national and regional agencies responsible for monitoring surface waters tend to quantify chemical indicators of water quality using a single sample from each waterbody, thus largely ignoring spatial variability. We show here that total sample variability, which comprises both analytical variability and within-sample heterogeneity, of a number of important chemical indicators of water quality (chlorophyll a, total phosphorus, total nitrogen, soluble molybdate-reactive phosphorus and dissolved inorganic nitrogen) varies significantly both over time and among determinands, and can be extremely high. Within-sample heterogeneity, whose mean contribution to total sample variability ranged between 62% and 100%, was significantly higher in samples taken from rivers compared with those from lakes, and was shown to be reduced by filtration. Our results show clearly that neither a single sample, nor even two sub-samples from that sample, are adequate for the reliable, and statistically robust, detection of changes in the quality of surface waters. We recommend strongly that, in situations where it is practicable to take only a single sample from a waterbody, a minimum of three sub-samples be analysed from that sample for robust quantification of both the concentrations of determinands and total sample variability. PMID:17706740

  2. Sample size calculation: Basic principles

    PubMed Central

    Das, Sabyasachi; Mitra, Koel; Mandal, Mohanchandra

    2016-01-01

    Determining the sample size is a practical issue that has to be settled during the planning and design stage of a study. The aim of any clinical research is to detect the actual difference between two groups (power) and to provide an estimate of that difference with reasonable accuracy (precision). Hence, researchers should estimate the sample size a priori, well before conducting the study; post hoc sample size computation is conventionally discouraged. An adequate sample size minimizes random error or, in other words, reduces the likelihood that an observed difference has arisen by chance. Too small a sample may fail to answer the research question and can be of questionable validity or provide an imprecise answer, while too large a sample may answer the question but is resource-intensive and may be unethical. More transparency in the calculation of sample size is required so that it can be justified and replicated when reported. PMID:27729692
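
    A minimal Python sketch of the kind of a priori calculation the abstract refers to, using the standard normal-approximation formula for comparing two means. The standard deviation and clinically relevant difference below are illustrative assumptions, not values from the paper.

        import math
        from scipy.stats import norm

        def n_per_group(sigma, delta, alpha=0.05, power=0.80):
            """Per-group n for a two-sided comparison of two means (normal approximation)."""
            z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
            z_beta = norm.ppf(power)            # 0.84 for 80% power
            return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

        # Assumed values: SD of 10 units, smallest clinically relevant difference of 5 units.
        print(n_per_group(sigma=10.0, delta=5.0))   # about 63 patients per group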

  3. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question of how large a phylogenetic sample really is. I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions with an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades, and deciding on the importance of phylogenetic correlations. PMID:27343033

  4. Bronchoalveolar Lavage (BAL) for Research; Obtaining Adequate Sample Yield

    PubMed Central

    Collins, Andrea M.; Rylance, Jamie; Wootton, Daniel G.; Wright, Angela D.; Wright, Adam K. A.; Fullerton, Duncan G.; Gordon, Stephen B.

    2014-01-01

    We describe a research technique for fiberoptic bronchoscopy with bronchoalveolar lavage (BAL) using manual hand held suction in order to remove nonadherent cells and lung lining fluid from the mucosal surface. In research environments, BAL allows sampling of innate (lung macrophage), cellular (B- and T- cells), and humoral (immunoglobulin) responses within the lung. BAL is internationally accepted for research purposes and since 1999 the technique has been performed in > 1,000 subjects in the UK and Malawi by our group. Our technique uses gentle hand-held suction of instilled fluid; this is designed to maximize BAL volume returned and apply minimum shear force on ciliated epithelia in order to preserve the structure and function of cells within the BAL fluid and to preserve viability to facilitate the growth of cells in ex vivo culture. The research technique therefore uses a larger volume instillate (typically in the order of 200 ml) and employs manual suction to reduce cell damage. Patients are given local anesthetic, offered conscious sedation (midazolam), and tolerate the procedure well with minimal side effects. Verbal and written subject information improves tolerance and written informed consent is mandatory. Safety of the subject is paramount. Subjects are carefully selected using clear inclusion and exclusion criteria. This protocol includes a description of the potential risks, and the steps taken to mitigate them, a list of contraindications, pre- and post-procedure checks, as well as precise bronchoscopy and laboratory techniques. PMID:24686157

  5. Bronchoalveolar lavage (BAL) for research; obtaining adequate sample yield.

    PubMed

    Collins, Andrea M; Rylance, Jamie; Wootton, Daniel G; Wright, Angela D; Wright, Adam K A; Fullerton, Duncan G; Gordon, Stephen B

    2014-01-01

    We describe a research technique for fiberoptic bronchoscopy with bronchoalveolar lavage (BAL) using manual hand held suction in order to remove nonadherent cells and lung lining fluid from the mucosal surface. In research environments, BAL allows sampling of innate (lung macrophage), cellular (B- and T- cells), and humoral (immunoglobulin) responses within the lung. BAL is internationally accepted for research purposes and since 1999 the technique has been performed in > 1,000 subjects in the UK and Malawi by our group. Our technique uses gentle hand-held suction of instilled fluid; this is designed to maximize BAL volume returned and apply minimum shear force on ciliated epithelia in order to preserve the structure and function of cells within the BAL fluid and to preserve viability to facilitate the growth of cells in ex vivo culture. The research technique therefore uses a larger volume instillate (typically in the order of 200 ml) and employs manual suction to reduce cell damage. Patients are given local anesthetic, offered conscious sedation (midazolam), and tolerate the procedure well with minimal side effects. Verbal and written subject information improves tolerance and written informed consent is mandatory. Safety of the subject is paramount. Subjects are carefully selected using clear inclusion and exclusion criteria. This protocol includes a description of the potential risks, and the steps taken to mitigate them, a list of contraindications, pre- and post-procedure checks, as well as precise bronchoscopy and laboratory techniques.

  6. Biostatistics Series Module 5: Determining Sample Size.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 - β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. A smaller α or larger power will increase the sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the
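
    The qualitative relationships described above (a smaller α, larger power, or smaller effect size each increases the required sample size) can be illustrated with a short sketch. The effect sizes used are the conventional small, medium and large values of Cohen's d and are assumptions for illustration only.

        from statsmodels.stats.power import TTestIndPower

        solver = TTestIndPower()
        for d in (0.8, 0.5, 0.2):                       # assumed large/medium/small Cohen's d
            for alpha, power in ((0.05, 0.80), (0.05, 0.90), (0.01, 0.80)):
                n = solver.solve_power(effect_size=d, alpha=alpha, power=power,
                                       ratio=1.0, alternative='two-sided')
                print(f"d={d:.1f}  alpha={alpha:.2f}  power={power:.2f}  ->  n per group = {n:.0f}")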

  7. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. A smaller α or larger power will increase the sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the

  8. On Sample Sizes for Non-Matched-Pair IR Experiments.

    ERIC Educational Resources Information Center

    Robertson, S. E.

    1990-01-01

    Discusses the problem of determining an adequate sample size for an information retrieval experiment comparing two systems on separate samples of requests. The application of statistical methods to information retrieval experiments is discussed, the Mann-Whitney U Test is used for determining minimum sample sizes, and variables and distributions…
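
    A hedged sketch, not Robertson's derivation: one common way to approach minimum sample sizes for the Mann-Whitney U test is simulation under an assumed difference between the two retrieval systems. The normal score distributions and the shift used below are assumptions for illustration.

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(0)

        def mw_power(n_per_group, shift=0.5, sims=2000, alpha=0.05):
            """Simulated power of a two-sided Mann-Whitney U test for an assumed shift."""
            hits = 0
            for _ in range(sims):
                x = rng.normal(0.0, 1.0, n_per_group)    # scores from system A (assumed)
                y = rng.normal(shift, 1.0, n_per_group)  # scores from system B (assumed)
                if mannwhitneyu(x, y, alternative='two-sided').pvalue < alpha:
                    hits += 1
            return hits / sims

        for n in (20, 40, 60, 80):
            print(n, round(mw_power(n), 3))              # power rises towards 0.8 as n grows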

  9. Sample Size and Correlational Inference

    ERIC Educational Resources Information Center

    Anderson, Richard B.; Doherty, Michael E.; Friedrich, Jeff C.

    2008-01-01

    In 4 studies, the authors examined the hypothesis that the structure of the informational environment makes small samples more informative than large ones for drawing inferences about population correlations. The specific purpose of the studies was to test predictions arising from the signal detection simulations of R. B. Anderson, M. E. Doherty,…

  10. In situ sampling in coastal waters - in search for an adequate spatial resolution for chlorophyll monitoring

    NASA Astrophysics Data System (ADS)

    Tolvanen, H.; Suominen, T.

    2012-04-01

    Shallow coastal archipelagos give rise to highly dynamic water quality patterns. In situ sampling inevitably loses detail of this spatio-temporal variation, regardless of the spatial and temporal resolution of the monitoring. In the shallow coastal areas of SW Finland in the Baltic Sea, the spatio-temporal variation of water properties is especially high due to the complexity of the archipelago environment and its bathymetry. Water quality monitoring is traditionally carried out in situ on a point network with 5-20 km distance between the sampling stations. The temporal coverage is also irregular and often focused on high summer (late July to early August) to capture the highest algal occurrences resulting from eutrophication. The amount of phytoplankton may have irregular vertical variation caused by local prevailing conditions, and therefore the biomass within the productive layer is usually measured by the amount of chlorophyll as a collective sample of the single vertical profile per station. However, the amount of phytoplankton also varies horizontally over short distances in coastal water that may be homogeneous in temperature and salinity. We tested the representativeness of the traditional single sampling station method by expanding the measurement station into six parallel sampling points within a 0.25 km2 area around the station. We measured the chlorophyll content in depth profiles from 1 m to 10 m depth using an optical water quality sonde. This sampling scheme provides us with a better understanding of the occurrence and distribution of phytoplankton in the water mass. The data include three six-point stations in different parts of the coastal archipelago. All stations were sampled several times during the growing season of 2007. In this paper, we compare the results of the established one-point collective depth sampling with the locally extended sampling scheme that also portrays the small-scale horizontal variation of phytoplankton. We

  11. On Sample Size Requirements for Johansen's Test.

    ERIC Educational Resources Information Center

    Coombs, William T.; Algina, James

    1996-01-01

    Type I error rates for the Johansen test were estimated using simulated data for a variety of conditions. Results indicate that Type I error rates for the Johansen test depend heavily on the number of groups and the ratio of the smallest sample size to the number of dependent variables. Sample size guidelines are presented. (SLD)

  12. Sample size estimation in prevalence studies.

    PubMed

    Arya, Ravindra; Antonisamy, Belavendra; Kumar, Sushil

    2012-11-01

    Estimation of appropriate sample size for prevalence surveys presents many challenges, particularly when the condition is very rare or has a tendency for geographical clustering. Sample size estimate for prevalence studies is a function of expected prevalence and precision for a given level of confidence expressed by the z statistic. Choice of the appropriate values for these variables is sometimes not straight-forward. Certain other situations do not fulfil the assumptions made in the conventional equation and present a special challenge. These situations include, but are not limited to, smaller population size in relation to sample size, sampling technique or missing data. This paper discusses practical issues in sample size estimation for prevalence studies with an objective to help clinicians and healthcare researchers make more informed decisions whether reviewing or conducting such a study. PMID:22562262
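
    A minimal sketch of the conventional prevalence formula n = z^2 p(1-p)/d^2 described above, together with the finite population correction that applies when the population is small relative to the sample. The expected prevalence, precision and population size values are assumptions for illustration.

        import math
        from scipy.stats import norm

        def prevalence_n(p, d, conf=0.95, N=None):
            """n = z^2 p(1-p)/d^2, optionally with a finite population correction."""
            z = norm.ppf(1 - (1 - conf) / 2)
            n = z ** 2 * p * (1 - p) / d ** 2
            if N is not None:                            # population small relative to sample
                n = n / (1 + (n - 1) / N)
            return math.ceil(n)

        print(prevalence_n(p=0.10, d=0.03))              # about 385 subjects
        print(prevalence_n(p=0.10, d=0.03, N=2000))      # fewer subjects for a small population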

  13. How to Show that Sample Size Matters

    ERIC Educational Resources Information Center

    Kozak, Marcin

    2009-01-01

    This article suggests how to explain a problem of small sample size when considering correlation between two Normal variables. Two techniques are shown: one based on graphs and the other on simulation. (Contains 3 figures and 1 table.)

  14. Sample sizes for confidence limits for reliability.

    SciTech Connect

    Darby, John L.

    2010-02-01

    We recently performed an evaluation of the implications of a reduced stockpile of nuclear weapons for surveillance to support estimates of reliability. We found that one technique developed at Sandia National Laboratories (SNL) under-estimates the required sample size for systems-level testing. For a large population the discrepancy is not important, but for a small population it is important. We found that another technique used by SNL provides the correct required sample size. For systems-level testing of nuclear weapons, samples are selected without replacement, and the hypergeometric probability distribution applies. Both of the SNL techniques focus on samples without defects from sampling without replacement. We generalized the second SNL technique to cases with defects in the sample. We created a computer program in Mathematica to automate the calculation of confidence for reliability. We also evaluated sampling with replacement where the binomial probability distribution applies.
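
    A minimal sketch of the sampling-with-replacement (binomial) case mentioned at the end of the abstract: with zero defects observed in n tests, the confidence C that reliability is at least R satisfies C = 1 - R^n. The hypergeometric (without-replacement) case treated in the report requires a different calculation; the R and C values below are assumptions.

        import math

        def n_zero_failures(R, C):
            """Tests required, all passing, to claim reliability >= R with confidence C."""
            return math.ceil(math.log(1 - C) / math.log(R))

        print(n_zero_failures(R=0.90, C=0.95))   # 29 tests
        print(n_zero_failures(R=0.95, C=0.95))   # 59 tests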

  15. Experimental determination of size distributions: analyzing proper sample sizes

    NASA Astrophysics Data System (ADS)

    Buffo, A.; Alopaeus, V.

    2016-04-01

    The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used.

  16. Sample size: how many patients are necessary?

    PubMed Central

    Fayers, P. M.; Machin, D.

    1995-01-01

    The need for sample size calculations is briefly reviewed: many of the arguments against small trials are already well known, and we only cursorily repeat them in passing. Problems that arise in the estimation of sample size are then discussed, with particular reference to survival studies. However, most of the issues which we discuss are equally applicable to other types of study. Finally, prognostic factor analysis designs are discussed, since this is another area in which experience shows that far too many studies are of an inadequate size and yield misleading results. PMID:7599035

  17. Sample size calculation in metabolic phenotyping studies.

    PubMed

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by Kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for Kernel and log-normal estimates, and in GNU Octave for log-normal estimates (Kernel density estimates are not robust enough in GNU octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. PMID:25600654

  18. Improved sample size determination for attributes and variables sampling

    SciTech Connect

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs.

  1. A New Sample Size Formula for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…

  2. Exploratory Factor Analysis with Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  3. Statistical Analysis Techniques for Small Sample Sizes

    NASA Technical Reports Server (NTRS)

    Navard, S. E.

    1984-01-01

    The problem of small sample sizes encountered in the analysis of space-flight data is examined. Because of the small amount of data available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on the considerations needed to choose the most appropriate test for a given type of analysis.

  4. Sample size and optimal sample design in tuberculosis surveys

    PubMed Central

    Sánchez-Crespo, J. L.

    1967-01-01

    Tuberculosis surveys sponsored by the World Health Organization have been carried out in different communities during the last few years. Apart from the main epidemiological findings, these surveys have provided basic statistical data for use in the planning of future investigations. In this paper an attempt is made to determine the sample size desirable in future surveys that include one of the following examinations: tuberculin test, direct microscopy, and X-ray examination. The optimum cluster sizes are found to be 100-150 children under 5 years of age in the tuberculin test, at least 200 eligible persons in the examination for excretors of tubercle bacilli (direct microscopy) and at least 500 eligible persons in the examination for persons with radiological evidence of pulmonary tuberculosis (X-ray). Modifications of the optimum sample size in combined surveys are discussed. PMID:5300008

  5. Sample-size requirements for evaluating population size structure

    USGS Publications Warehouse

    Vokoun, J.C.; Rabeni, C.F.; Stanovick, J.S.

    2001-01-01

    A method with an accompanying computer program is described to estimate the number of individuals needed to construct a sample length-frequency with a given accuracy and precision. First, a reference length-frequency assumed to be accurate for a particular sampling gear and collection strategy was constructed. Bootstrap procedures created length-frequencies with increasing sample size that were randomly chosen from the reference data and then were compared with the reference length-frequency by calculating the mean squared difference. Outputs from two species collected with different gears and an artificial even length-frequency are used to describe the characteristics of the method. The relations between the number of individuals used to construct a length-frequency and the similarity to the reference length-frequency followed a negative exponential distribution and showed the importance of using 300-400 individuals whenever possible.
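
    A minimal sketch of the bootstrap procedure described above (not the authors' program): subsamples of increasing size are drawn from a reference length-frequency and compared with it via the mean squared difference of the length-class proportions. The simulated lengths and the 25 mm length classes are stand-in assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        reference = rng.gamma(shape=9.0, scale=25.0, size=5000)   # stand-in fish lengths, mm
        bins = np.arange(0, 825, 25)                              # 25 mm length classes
        ref_prop = np.histogram(reference, bins=bins)[0] / len(reference)

        def mean_sq_diff(n, reps=500):
            """Mean squared difference between subsample and reference length-frequencies."""
            msd = []
            for _ in range(reps):
                sub = rng.choice(reference, size=n, replace=True)
                prop = np.histogram(sub, bins=bins)[0] / n
                msd.append(np.mean((prop - ref_prop) ** 2))
            return np.mean(msd)

        for n in (50, 100, 200, 300, 400):
            print(n, mean_sq_diff(n))   # MSD declines steeply up to roughly 300-400 individuals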

  6. Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2010-01-01

    This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…

  7. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably a?Y50), and report sample sizes in published results.

  8. Sample size matters: Investigating the optimal sample size for a logistic regression debris flow susceptibility model

    NASA Astrophysics Data System (ADS)

    Heckmann, Tobias; Gegg, Katharina; Becht, Michael

    2013-04-01

    Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared to heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that explain best the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems: First, a sample too large will violate the independent sample assumption as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size
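
    A minimal sketch of the Monte-Carlo idea described above, not the authors' implementation: for several candidate non-event sample sizes, repeatedly draw a sample, fit a logistic regression, and examine how stable the fitted coefficients are across draws. The synthetic predictors below stand in for the DEM derivatives and land-cover variables.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        X_event = rng.normal(1.0, 1.0, size=(300, 3))    # cells in mapped initiation zones (assumed)
        X_non = rng.normal(0.0, 1.0, size=(50000, 3))    # candidate non-event raster cells (assumed)

        for n_non in (300, 1000, 5000):                  # candidate non-event sample sizes
            coefs = []
            for _ in range(200):                         # repeated random sampling
                idx = rng.choice(len(X_non), size=n_non, replace=False)
                X = np.vstack([X_event, X_non[idx]])
                y = np.r_[np.ones(len(X_event)), np.zeros(n_non)]
                coefs.append(LogisticRegression(max_iter=1000).fit(X, y).coef_[0])
            print(n_non, np.round(np.std(coefs, axis=0), 3))   # coefficient spread shrinks with n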

  9. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... attest engagement, the auditor shall sample relevant populations to which agreed-upon procedures will be... population; and (b) Sample size shall be determined using one of the following options: (1) Option 1. Determine the sample size using the following table: Sample Size, Based Upon Population Size No....

  10. (Sample) Size Matters! An Examination of Sample Size from the SPRINT Trial

    PubMed Central

    Bhandari, Mohit; Tornetta, Paul; Rampersad, Shelly-Ann; Sprague, Sheila; Heels-Ansdell, Diane; Sanders, David W.; Schemitsch, Emil H.; Swiontkowski, Marc; Walter, Stephen

    2012-01-01

    Introduction Inadequate sample size and power in randomized trials can result in misleading findings. This study demonstrates the effect of sample size in a large, clinical trial by evaluating the results of the SPRINT (Study to Prospectively evaluate Reamed Intramedullary Nails in Patients with Tibial fractures) trial as it progressed. Methods The SPRINT trial evaluated reamed versus unreamed nailing of the tibia in 1226 patients, as well as in open and closed fracture subgroups (N=400 and N=826, respectively). We analyzed the re-operation rates and relative risk comparing treatment groups at 50, 100 and then increments of 100 patients up to the final sample size. Results at various enrollments were compared to the final SPRINT findings. Results In the final analysis, there was a statistically significant decreased risk of re-operation with reamed nails for closed fractures (relative risk reduction 35%). Results for the first 35 patients enrolled suggested reamed nails increased the risk of reoperation in closed fractures by 165%. Only after 543 patients with closed fractures were enrolled did the results reflect the final advantage for reamed nails in this subgroup. Similarly, the trend towards an increased risk of re-operation for open fractures (23%) was not seen until 62 patients with open fractures were enrolled. Conclusions Our findings highlight the risk of conducting a trial with insufficient sample size and power. Such studies are not only at risk of missing true effects, but also of giving misleading results. Level of Evidence N/A PMID:23525086

  11. Sampling variability in estimates of flow characteristics in coarse-bed channels: Effects of sample size

    NASA Astrophysics Data System (ADS)

    Cienciala, Piotr; Hassan, Marwan A.

    2016-03-01

    Adequate description of hydraulic variables based on a sample of field measurements is challenging in coarse-bed streams, a consequence of high spatial heterogeneity in flow properties that arises due to the complexity of channel boundary. By applying a resampling procedure based on bootstrapping to an extensive field data set, we have estimated sampling variability and its relationship with sample size in relation to two common methods of representing flow characteristics, spatially averaged velocity profiles and fitted probability distributions. The coefficient of variation in bed shear stress and roughness length estimated from spatially averaged velocity profiles and in shape and scale parameters of gamma distribution fitted to local values of bed shear stress, velocity, and depth was high, reaching 15-20% of the parameter value even at the sample size of 100 (sampling density 1 m-2). We illustrated implications of these findings with two examples. First, sensitivity analysis of a 2-D hydrodynamic model to changes in roughness length parameter showed that the sampling variability range observed in our resampling procedure resulted in substantially different frequency distributions and spatial patterns of modeled hydraulic variables. Second, using a bedload formula, we showed that propagation of uncertainty in the parameters of a gamma distribution used to model bed shear stress led to the coefficient of variation in predicted transport rates exceeding 50%. Overall, our findings underscore the importance of reporting the precision of estimated hydraulic parameters. When such estimates serve as input into models, uncertainty propagation should be explicitly accounted for by running ensemble simulations.
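
    A minimal sketch of the resampling procedure described above: bootstrap samples of increasing size are drawn from a set of local bed shear stress values (simulated here as a stand-in for field data), a gamma distribution is fitted to each, and the coefficient of variation of the fitted parameters is tracked against sample size.

        import numpy as np
        from scipy.stats import gamma

        rng = np.random.default_rng(3)
        field = gamma.rvs(a=2.0, scale=4.0, size=400, random_state=rng)   # stand-in shear stress data

        def parameter_cv(n, reps=300):
            """CV of gamma shape and scale fitted to bootstrap samples of size n."""
            shapes, scales = [], []
            for _ in range(reps):
                boot = rng.choice(field, size=n, replace=True)
                a, _, scale = gamma.fit(boot, floc=0)
                shapes.append(a)
                scales.append(scale)
            return np.std(shapes) / np.mean(shapes), np.std(scales) / np.mean(scales)

        for n in (25, 50, 100):
            cv_shape, cv_scale = parameter_cv(n)
            print(n, round(cv_shape, 3), round(cv_scale, 3))   # variability is still sizeable at n = 100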

  12. Optimal sample size allocation for Welch's test in one-way heteroscedastic ANOVA.

    PubMed

    Shieh, Gwowen; Jan, Show-Li

    2015-06-01

    The determination of an adequate sample size is a vital aspect in the planning stage of research studies. A prudent strategy should incorporate all of the critical factors and cost considerations into sample size calculations. This study concerns the allocation schemes of group sizes for Welch's test in a one-way heteroscedastic ANOVA. Optimal allocation approaches are presented for minimizing the total cost while maintaining adequate power and for maximizing power performance for a fixed cost. The commonly recommended ratio of sample sizes is proportional to the ratio of the population standard deviations or the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Detailed numerical investigations have shown that these usual allocation methods generally do not give the optimal solution. The suggested procedures are illustrated using an example of the cost-efficiency evaluation in multidisciplinary pain centers.
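
    A minimal sketch of the two conventional allocation rules cited above, which the paper reports are generally not optimal: group sizes proportional to the group standard deviations, or to the standard deviations scaled by the square root of the unit-cost ratio. The standard deviations and costs below are assumptions.

        import math

        sigma1, sigma2 = 8.0, 4.0        # assumed group standard deviations
        c1, c2 = 2.0, 1.0                # assumed unit sampling costs per observation

        ratio_sd = sigma1 / sigma2                              # n1:n2 = 2:1
        ratio_sd_cost = (sigma1 / sigma2) * math.sqrt(c2 / c1)  # n1:n2 of about 1.41:1
        print(f"allocation by SD rule:       n1/n2 = {ratio_sd:.2f}")
        print(f"allocation by SD/cost rule:  n1/n2 = {ratio_sd_cost:.2f}")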

  13. Are Sample Sizes Clear and Justified in RCTs Published in Dental Journals?

    PubMed Central

    Koletsi, Despina; Fleming, Padhraig S.; Seehra, Jadbinder; Bagos, Pantelis G.; Pandis, Nikolaos

    2014-01-01

    Sample size calculations are advocated by the CONSORT group to justify sample sizes in randomized controlled trials (RCTs). The aim of this study was primarily to evaluate the reporting of sample size calculations, to establish the accuracy of these calculations in dental RCTs and to explore potential predictors associated with adequate reporting. Electronic searching was undertaken in eight leading specific and general dental journals. Replication of sample size calculations was undertaken where possible. Assumed variances or odds for control and intervention groups were also compared against those observed. The relationship between parameters including journal type, number of authors, trial design, involvement of methodologist, single-/multi-center study and region and year of publication, and the accuracy of sample size reporting was assessed using univariable and multivariable logistic regression. Of 413 RCTs identified, sufficient information to allow replication of sample size calculations was provided in only 121 studies (29.3%). Recalculations demonstrated an overall median overestimation of sample size of 15.2% after provisions for losses to follow-up. There was evidence that journal, methodologist involvement (OR = 1.97, CI: 1.10, 3.53), multi-center settings (OR = 1.86, CI: 1.01, 3.43) and time since publication (OR = 1.24, CI: 1.12, 1.38) were significant predictors of adequate description of sample size assumptions. Among journals JCP had the highest odds of adequately reporting sufficient data to permit sample size recalculation, followed by AJODO and JDR, with 61% (OR = 0.39, CI: 0.19, 0.80) and 66% (OR = 0.34, CI: 0.15, 0.75) lower odds, respectively. Both assumed variances and odds were found to underestimate the observed values. Presentation of sample size calculations in the dental literature is suboptimal; incorrect assumptions may have a bearing on the power of RCTs. PMID:24465806

  14. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.

  15. Analysis of the adequate size of a cord blood bank and comparison of HLA haplotype distributions between four populations.

    PubMed

    Haimila, Katri; Penttilä, Antti; Arvola, Anne; Auvinen, Marja-Kaisa; Korhonen, Matti

    2013-02-01

    The number of units and especially the number of different HLA haplotypes present in a cord blood (CB) bank is a crucial determinant of its usefulness. We generated data relevant to the development of our national CB in Finland. The HLA haplotype distribution was examined between specific populations. We developed graphical ways of data presentation that enable easy visualization of differences. First, we estimated the optimal size of a CB bank for Finland and found that approximately 1700 units are needed to provide a 5/6 HLA-matched donor for 80% of Finnish patients. Secondly, we evaluated HLA haplotype distributions between four locations, Finland, Japan, Sweden and Belgium. Our results showed that the Japanese Tokyo Cord Blood Bank differs in both the frequency and distribution of haplotypes from the European banks. The European banks (Finnish Cord Blood Registry, The Swedish National Cord Blood Bank, and Marrow Donor Program-Belgium) have similar frequencies of common haplotypes, but 26% of the haplotypes in the Finnish CB bank are unique, which justifies the existence of a national bank. The tendency to a homogenous HLA haplotype distribution in banks underlines the need for targeting recruitment at the poorly represented minority populations.
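
    A much-simplified sketch of the underlying coverage argument, not the authors' haplotype-based estimation: if a randomly selected unit is an acceptable match for a patient with probability p, a bank of N units covers that patient with probability 1 - (1 - p)^N. The match probability used below is an assumption for illustration.

        import math

        def bank_size(p_match, coverage=0.80):
            """Units needed so that P(at least one acceptable match) reaches the target coverage."""
            return math.ceil(math.log(1 - coverage) / math.log(1 - p_match))

        print(bank_size(p_match=0.001))   # about 1,609 units if each unit matches with p = 0.001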

  16. 7 CFR 52.3757 - Standard sample unit size.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Ripe Olives 1 Product Description, Types, Styles, and Grades § 52.3757 Standard sample unit size... following standard sample unit size for the applicable style: (a) Whole and pitted—50 olives. (b)...

  17. 7 CFR 52.3757 - Standard sample unit size.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Ripe Olives 1 Product Description, Types, Styles, and Grades § 52.3757 Standard sample unit size... following standard sample unit size for the applicable style: (a) Whole and pitted—50 olives. (b)...

  18. Considerations when calculating the sample size for an inequality test

    PubMed Central

    2016-01-01

    Calculating the sample size is a vital step during the planning of a study in order to ensure the desired power for detecting clinically meaningful differences. However, estimating the sample size is not always straightforward. A number of key components should be considered to calculate a suitable sample size. In this paper, general considerations for conducting sample size calculations for inequality tests are summarized. PMID:27482308

  19. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  20. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999385

  1. Fluorescent in situ hybridization with specific DNA probes offers adequate detection of Enterococcus faecalis and Enterococcus faecium in clinical samples.

    PubMed

    Waar, Karola; Degener, John E; van Luyn, Marja J; Harmsen, Hermie J M

    2005-10-01

    Enterococcus faecalis and Enterococcus faecium are among the leading causes of hospital-acquired infections. Reliable and quick identification of E. faecalis and E. faecium is important for accurate treatment and understanding their role in the pathogenesis of infections. Fluorescent in situ hybridization (FISH) of whole bacterial cells with oligonucleotides targeted at the 16S rRNA molecule leads to a reduced time to identification. In clinical practice, FISH therefore can be used in situations in which quick identification is necessary for optimal treatment of the patient. Furthermore, the abundance, spatial distribution and bacterial cell morphology can be observed in situ. This report describes the design of two fluorescent-labelled oligonucleotides that, respectively, detect the 16S rRNA of E. faecalis and the 16S rRNA of E. faecium, Enterococcus hirae, Enterococcus mundtii, Enterococcus villorum and Enterococcus saccharolyticus. Different protocols for the application of these oligonucleotides with FISH in different clinical samples such as faeces or blood cultures are given. Enterococci in a biofilm attached to a biomaterial were also visualized. Embedding of the biomaterial preserved the morphology and therefore the architecture of the biofilm could be observed. The usefulness of other studies describing FISH for detection of enterococci is generally hampered by the fact that they have only focused on one material and one protocol to detect the enterococci. However, the results of this study show that the probes can be used both in the routine laboratory to detect and determine the enterococcal species in different clinical samples and in a research setting to enumerate and detect the enterococci in their physical environment.

  2. 7 CFR 52.803 - Sample unit size.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... following sample unit sizes for the applicable factor: (a) Pits, character, and harmless extraneous material—20 ounces of drained cherries. (b) Size, color, and defects (other than harmless extraneous...

  3. 7 CFR 52.775 - Sample unit size.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... unit size. Compliance with requirements for the size and the various quality factors is based on the... extraneous material—The total contents of each container in the sample. Factors of Quality...

  4. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
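
    A much-simplified sketch, not the program described above (which handles band-recovery survival models): if annual survival S were estimated as a simple binomial proportion from n marked birds, the coefficient of variation would be sqrt((1 - S)/(n S)), so the n needed for a target CV follows directly. The survival and CV values below are assumptions.

        import math

        def n_for_target_cv(S, cv_target):
            """n giving CV(S_hat) = sqrt((1 - S) / (n * S)) equal to the target CV."""
            return math.ceil((1 - S) / (S * cv_target ** 2))

        for S in (0.5, 0.7, 0.9):
            print(S, n_for_target_cv(S, cv_target=0.10))   # 100, 43, 12 birds (hypothetical)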

  5. A review of software for sample size determination.

    PubMed

    Dattalo, Patrick

    2009-09-01

    The size of a sample is an important element in determining the statistical precision with which population values can be estimated. This article identifies and describes free and commercial programs for sample size determination. Programs are categorized as follows: (a) multiple procedure for sample size determination; (b) single procedure for sample size determination; and (c) Web-based. Programs are described in terms of (a) cost; (b) ease of use, including interface, operating system and hardware requirements, and availability of documentation and technical support; (c) file management, including input and output formats; and (d) analytical and graphical capabilities. PMID:19696082

  6. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) REGULATION OF FUELS AND FUEL ADDITIVES Attest Engagements § 80.127 Sample size guidelines. In performing the attest engagement, the auditor shall sample relevant populations to which agreed-upon procedures will...

  7. 7 CFR 52.775 - Sample unit size.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... United States Standards for Grades of Canned Red Tart Pitted Cherries 1 Sample Unit Size § 52.775 Sample... drained cherries. (b) Defects (other than harmless extraneous material)—100 cherries. (c)...

  8. Sampling strategy in molecular microbial ecology: influence of soil sample size on DNA fingerprinting analysis of fungal and bacterial communities.

    PubMed

    Ranjard, Lionel; Lejon, David P H; Mougel, Christophe; Schehrer, Lucie; Merdinoglu, Didier; Chaussod, Rémi

    2003-11-01

    Assessing soil microbial community structure by the use of molecular techniques requires a satisfactory sampling strategy that takes into account the high microbial diversity and the heterogeneous distribution of microorganisms in the soil matrix. The influence of the sample size of three different soil types (sand, silt and clay soils) on the DNA yield and analysis of bacterial and fungal community structure was investigated. Six sample sizes from 0.125 g to 4 g were evaluated. The genetic community structure was assessed by automated ribosomal intergenic spacer analysis (A-RISA fingerprint). Variations between bacterial (B-ARISA) and fungal (F-ARISA) community structure were quantified by using principal component analysis (PCA). DNA yields were positively correlated with the sample size for the sandy and silty soils, suggesting an influence of the sample size on DNA recovery, whereas no correlation was observed in the clay soil. B-ARISA was shown to be consistent between the different sample sizes for each soil type, indicating that the sampling procedure has no influence on the assessment of bacterial community structure. On the contrary, for F-ARISA profiles, strong variations were observed between replicates of the smaller samples (<1 g). PCA revealed that sampling aliquots of soil ≥1 g are required to obtain robust and reproducible fingerprinting analysis of the genetic structure of fungal communities. However, the smallest samples could be adequate for the detection of minor populations masked by dominant ones in larger samples. The sampling strategy should therefore be different according to the objectives: rather large soil samples (≥1 g) for a global description of the genetic community structure, or a large number of small soil samples for a more complete inventory of microbial diversity.

  9. Sample Size and Bentler and Bonett's Nonnormed Fit Index.

    ERIC Educational Resources Information Center

    Bollen, Kenneth A.

    1986-01-01

    This note shows that, contrary to what has been claimed, Bentler and Bonett's nonnormed fit index is dependent on sample size. Specifically, for a constant value of the fitting function, the nonnormed index is inversely related to sample size. A simple alternative fit measure is proposed that removes this dependency. (Author/LMO)
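
    A minimal sketch of the point made above, assuming the usual maximum-likelihood relation chi-square = (N - 1)F: holding the fitting function values constant, the nonnormed (Tucker-Lewis) index still changes as N grows. The F and degrees-of-freedom values below are illustrative assumptions, not Bollen's examples.

        def nnfi(N, F_model, df_model, F_null, df_null):
            """Nonnormed (Tucker-Lewis) index with chi-square taken as (N - 1) * F."""
            r_model = (N - 1) * F_model / df_model
            r_null = (N - 1) * F_null / df_null
            return (r_null - r_model) / (r_null - 1)

        for N in (50, 200, 1000):
            # the index falls as N grows, even though the fitting function values are held constant
            print(N, round(nnfi(N, F_model=0.05, df_model=10, F_null=2.0, df_null=15), 3))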

  10. Preliminary Proactive Sample Size Determination for Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Koran, Jennifer

    2016-01-01

    Proactive preliminary minimum sample size determination can be useful for the early planning stages of a latent variable modeling study to set a realistic scope, long before the model and population are finalized. This study examined existing methods and proposed a new method for proactive preliminary minimum sample size determination.

  11. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  12. Power Analysis and Sample Size Determination in Metabolic Phenotyping.

    PubMed

    Blaise, Benjamin J; Correia, Gonçalo; Tin, Adrienne; Young, J Hunter; Vergnaud, Anne-Claire; Lewis, Matthew; Pearce, Jake T M; Elliott, Paul; Nicholson, Jeremy K; Holmes, Elaine; Ebbels, Timothy M D

    2016-05-17

    Estimation of statistical power and sample size is a key aspect of experimental design. However, in metabolic phenotyping, there is currently no accepted approach for these tasks, in large part due to the unknown nature of the expected effect. In such hypothesis-free science, neither the number nor the class of important analytes, nor the effect size, is known a priori. We introduce a new approach, based on multivariate simulation, which deals effectively with the highly correlated structure and high dimensionality of metabolic phenotyping data. First, a large data set is simulated based on the characteristics of a pilot study investigating a given biomedical issue. An effect of a given size, corresponding either to a discrete (classification) or continuous (regression) outcome, is then added. Different sample sizes are modeled by randomly selecting data sets of various sizes from the simulated data. We investigate different methods for effect detection, including univariate and multivariate techniques. Our framework allows us to investigate the complex relationship between sample size, power, and effect size for real multivariate data sets. For instance, we demonstrate for an example pilot data set that certain features achieve a power of 0.8 for a sample size of 20 samples or that a cross-validated predictivity Q²Y of 0.8 is reached with an effect size of 0.2 and 200 samples. We exemplify the approach for both nuclear magnetic resonance and liquid chromatography-mass spectrometry data from humans and the model organism C. elegans.
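
    A stripped-down sketch of the simulation idea described above: simulate a large correlated data set, spike an effect into a subset of features, subsample at several sizes, and count how often the effect is recovered. Everything here, including the number of features, the correlation structure, the effect size, and the use of univariate t tests with Benjamini-Hochberg correction, is an illustrative assumption rather than the authors' exact procedure.

        import numpy as np
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(1)
        n_feat, n_hit, effect = 200, 10, 0.8      # features, spiked features, effect size (SD units)
        cov = 0.5 * np.ones((n_feat, n_feat)) + 0.5 * np.eye(n_feat)   # simple correlated structure

        def empirical_power(n_per_group, n_sim=200, alpha=0.05):
            hits = 0
            for _ in range(n_sim):
                a = rng.multivariate_normal(np.zeros(n_feat), cov, size=n_per_group)
                b = rng.multivariate_normal(np.zeros(n_feat), cov, size=n_per_group)
                b[:, :n_hit] += effect                       # add the effect to the first n_hit features
                p = stats.ttest_ind(a, b, axis=0).pvalue
                reject = multipletests(p, alpha=alpha, method="fdr_bh")[0]
                hits += reject[:n_hit].any()                 # at least one spiked feature detected
            return hits / n_sim

        for n in (10, 20, 40, 80):
            print(n, empirical_power(n))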

  13. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition, and learning, effect sizes were relatively large although sample sizes were small; even so, because of the small samples, some meaningful effects could not be detected. In other fields, because of the large sample sizes, even trivially small effects could be detected. This implies that researchers who could not obtain sufficiently large effect sizes tended to use larger samples to obtain significant results.
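
    The kind of sample statistical power surveyed above can be reproduced, for a two-group comparison, from a reported effect size and the group sizes. A short sketch using statsmodels; the d and n values are invented.

        # Post-hoc power for an independent-samples t test, given a sample effect size
        from statsmodels.stats.power import TTestIndPower

        d, n_per_group = 0.45, 24    # hypothetical Cohen's d and group size from a published test
        power = TTestIndPower().power(effect_size=d, nobs1=n_per_group, alpha=0.05, ratio=1.0)
        print(f"achieved power ~ {power:.2f}")   # small n with a medium effect often yields power well below .80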

  14. Sample Size Requirements for Comparing Two Alpha Coefficients.

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2003-01-01

    Derived general formulas to determine the sample size requirements for hypothesis testing with desired power and interval estimation with desired precision. Illustrated the approach with the example of a screening test for adolescent attention deficit disorder. (SLD)

  15. Effects of Mesh Size on Sieved Samples of Corophium volutator

    NASA Astrophysics Data System (ADS)

    Crewe, Tara L.; Hamilton, Diana J.; Diamond, Antony W.

    2001-08-01

    Corophium volutator (Pallas), gammaridean amphipods found on intertidal mudflats, are frequently collected in mud samples sieved on mesh screens. However, mesh sizes used vary greatly among studies, raising the possibility that sampling methods bias results. The effect of using different mesh sizes on the resulting size-frequency distributions of Corophium was tested by collecting Corophium from mud samples with 0·5 and 0·25 mm sieves. More than 90% of Corophium less than 2 mm long passed through the larger sieve. A significantly smaller, but still substantial, proportion of 2-2·9 mm Corophium (30%) was also lost. Larger size classes were unaffected by mesh size. Mesh size significantly changed the observed size-frequency distribution of Corophium, and effects varied with sampling date. It is concluded that a 0·5 mm sieve is suitable for studies concentrating on adults, but to accurately estimate Corophium density and size-frequency distributions, a 0·25 mm sieve must be used.

  16. The Precision Efficacy Analysis for Regression Sample Size Method.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…

  17. Sample Size Determination: A Comparison of Attribute, Continuous Variable, and Cell Size Methods.

    ERIC Educational Resources Information Center

    Clark, Philip M.

    1984-01-01

    Describes three methods of sample size determination, each having its use in investigation of social science problems: Attribute method; Continuous Variable method; Galtung's Cell Size method. Statistical generalization, benefits of cell size method (ease of use, trivariate analysis and trichotomized variables), and choice of method are…

  18. The Effect of Sample Size on Latent Growth Models.

    ERIC Educational Resources Information Center

    Hamilton, Jennifer; Gagne, Phillip E.; Hancock, Gregory R.

    A Monte Carlo simulation approach was taken to investigate the effect of sample size on a variety of latent growth models. A fully balanced experimental design was implemented, with samples drawn from multivariate normal populations specified to represent 12 unique growth models. The models varied factorially by crossing number of time points,…

  19. Uncertainty of the sample size reduction step in pesticide residue analysis of large-sized crops.

    PubMed

    Omeroglu, P Yolci; Ambrus, Á; Boyacioglu, D; Majzik, E Solymosne

    2013-01-01

    To estimate the uncertainty of the sample size reduction step, each unit in laboratory samples of papaya and cucumber was cut into four segments in longitudinal directions and two opposite segments were selected for further homogenisation while the other two were discarded. Jackfruit was cut into six segments in longitudinal directions, and all segments were kept for further analysis. To determine the pesticide residue concentrations in each segment, they were individually homogenised and analysed by chromatographic methods. One segment from each unit of the laboratory sample was drawn randomly to obtain 50 theoretical sub-samples with an MS Office Excel macro. The residue concentrations in a sub-sample were calculated from the weight of segments and the corresponding residue concentration. The coefficient of variation calculated from the residue concentrations of 50 sub-samples gave the relative uncertainty resulting from the sample size reduction step. The sample size reduction step, which is performed by selecting one longitudinal segment from each unit of the laboratory sample, resulted in relative uncertainties of 17% and 21% for field-treated jackfruits and cucumber, respectively, and 7% for post-harvest treated papaya. The results demonstrated that sample size reduction is an inevitable source of uncertainty in pesticide residue analysis of large-sized crops. The post-harvest treatment resulted in a lower variability because the dipping process leads to a more uniform residue concentration on the surface of the crops than does the foliar application of pesticides.
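
    The sub-sampling uncertainty estimate described above (random selection of one longitudinal segment per unit, repeated 50 times, then the CV of the resulting sub-sample concentrations) is easy to reproduce outside a spreadsheet. A sketch with made-up segment weights and residue concentrations.

        import numpy as np

        rng = np.random.default_rng(2)
        # Hypothetical laboratory sample: 10 units, 4 segments each, with segment
        # weights (g) and measured residue concentrations (mg/kg)
        weights = rng.uniform(150, 250, size=(10, 4))
        residues = rng.lognormal(mean=-1.0, sigma=0.5, size=(10, 4))

        def subsample_concentration():
            # draw one segment at random from every unit, then mass-weight the residues
            idx = rng.integers(0, 4, size=weights.shape[0])
            w = weights[np.arange(10), idx]
            c = residues[np.arange(10), idx]
            return np.sum(w * c) / np.sum(w)

        concs = np.array([subsample_concentration() for _ in range(50)])
        cv = concs.std(ddof=1) / concs.mean()
        print(f"relative uncertainty of the size-reduction step ~ {100 * cv:.1f}%")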

  20. Two-stage chain sampling inspection plans with different sample sizes in the two stages

    NASA Technical Reports Server (NTRS)

    Stephens, K. S.; Dodge, H. F.

    1976-01-01

    A further generalization of the family of 'two-stage' chain sampling inspection plans is developed - viz, the use of different sample sizes in the two stages. Evaluation of the operating characteristics is accomplished by the Markov chain approach of the earlier work, modified to account for the different sample sizes. Markov chains for a number of plans are illustrated and several algebraic solutions are developed. Since these plans involve a variable amount of sampling, an evaluation of the average sampling number (ASN) is developed. A number of OC curves and ASN curves are presented. Some comparisons with plans having only one sample size are presented and indicate that improved discrimination is achieved by the two-sample-size plans.

  1. Sample size calculation for the proportional hazards cure model.

    PubMed

    Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin

    2012-12-20

    In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), such as trials for non-Hodgkin lymphoma. The popularly used sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for survival times of uncured patients and a logistic distribution is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in short-term survival and/or the cure fraction. Furthermore, we also investigate as numerical examples the impacts of accrual methods and durations of accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with the use of data from a melanoma trial. PMID:22786805

  2. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  3. Calculating Sample Size in Trials Using Historical Controls

    PubMed Central

    Zhang, Song; Cao, Jing; Ahn, Chul

    2011-01-01

    Background Makuch and Simon [1] developed a sample size formula for historical control trials. When assessing power, they assumed the true control treatment effect to be equal to the observed effect from the historical control group. Many researchers have pointed out that the M-S approach does not preserve the nominal power and type I error when considering the uncertainty in the true historical control treatment effect. Purpose To develop a sample size formula that properly accounts for the underlying randomness in the observations from the historical control group. Methods We reveal the extremely skewed nature in the distributions of power and type I error, obtained over all the random realizations of the historical control data. The skewness motivates us to derive a sample size formula that controls the percentiles, instead of the means, of the power and type I error. Results A closed-form sample size formula is developed to control arbitrary percentiles of power and type I error for historical control trials. A simulation study further demonstrates that this approach preserves the operational characteristics in a more realistic scenario where the population variances are unknown and replaced by sample variances. Limitations The closed-form sample size formula is derived for continuous outcomes. The formula is more complicated for binary or survival time outcomes. Conclusions We have derived a closed-form sample size formula that controls the percentiles instead of means of power and type I error in historical control trials, which have extremely skewed distributions over all the possible realizations of historical control data. PMID:20573638

  4. Sample size in psychological research over the past 30 years.

    PubMed

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  5. On random sample size, ignorability, ancillarity, completeness, separability, and degeneracy: sequential trials, random sample sizes, and missing data.

    PubMed

    Molenberghs, Geert; Kenward, Michael G; Aerts, Marc; Verbeke, Geert; Tsiatis, Anastasios A; Davidian, Marie; Rizopoulos, Dimitris

    2014-02-01

    The vast majority of settings for which frequentist statistical properties are derived assume a fixed, a priori known sample size. Familiar properties then follow, such as, for example, the consistency, asymptotic normality, and efficiency of the sample average for the mean parameter, under a wide range of conditions. We are concerned here with the alternative situation in which the sample size is itself a random variable which may depend on the data being collected. Further, the rule governing this may be deterministic or probabilistic. There are many important practical examples of such settings, including missing data, sequential trials, and informative cluster size. It is well known that special issues can arise when evaluating the properties of statistical procedures under such sampling schemes, and much has been written about specific areas (Grambsch P. Sequential sampling based on the observed Fisher information to guarantee the accuracy of the maximum likelihood estimator. Ann Stat 1983; 11: 68-77; Barndorff-Nielsen O and Cox DR. The effect of sampling rules on likelihood statistics. Int Stat Rev 1984; 52: 309-326). Our aim is to place these various related examples into a single framework derived from the joint modeling of the outcomes and sampling process and so derive generic results that in turn provide insight, and in some cases practical consequences, for different settings. It is shown that, even in the simplest case of estimating a mean, some of the results appear counterintuitive. In many examples, the sample average may exhibit small sample bias and, even when it is unbiased, may not be optimal. Indeed, there may be no minimum variance unbiased estimator for the mean. Such results follow directly from key attributes such as non-ancillarity of the sample size and incompleteness of the minimal sufficient statistic of the sample size and sample sum. Although our results have direct and obvious implications for estimation following group sequential

  6. Sample size considerations for clinical research studies in nuclear cardiology.

    PubMed

    Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J

    2015-12-01

    Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
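
    As a flavour of the calculations the article walks through, here is one of the listed cases, the number of patients per group needed for a two-sample t test, done with free software (statsmodels). The effect size, power and alpha are illustrative choices, not values from the paper.

        from statsmodels.stats.power import TTestIndPower
        from math import ceil

        # Detect a standardized difference (Cohen's d) of 0.5 with 80% power at alpha = 0.05
        n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.80, alpha=0.05, ratio=1.0)
        print("patients per group:", ceil(n_per_group))   # about 64 per group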

  8. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  9. Sampling benthic macroinvertebrates in a large flood-plain river: Considerations of study design, sample size, and cost

    USGS Publications Warehouse

    Bartsch, L.A.; Richardson, W.B.; Naimo, T.J.

    1998-01-01

    Estimation of benthic macroinvertebrate populations over large spatial scales is difficult due to the high variability in abundance and the cost of sample processing and taxonomic analysis. To determine a cost-effective, statistically powerful sample design, we conducted an exploratory study of the spatial variation of benthic macroinvertebrates in a 37 km reach of the Upper Mississippi River. We sampled benthos at 36 sites within each of two strata, contiguous backwater and channel border. Three standard ponar (525 cm²) grab samples were obtained at each site ('Original Design'). Analysis of variance and sampling cost of strata-wide estimates for abundance of Oligochaeta, Chironomidae, and total invertebrates showed that only one ponar sample per site ('Reduced Design') yielded essentially the same abundance estimates as the Original Design, while reducing the overall cost by 63%. A posteriori statistical power analysis (α = 0.05, β = 0.20) on the Reduced Design estimated that at least 18 sites per stratum were needed to detect differences in mean abundance between contiguous backwater and channel border areas for Oligochaeta, Chironomidae, and total invertebrates. Statistical power was nearly identical for the three taxonomic groups. The abundances of several taxa of concern (e.g., Hexagenia mayflies and Musculium fingernail clams) were too spatially variable to estimate power with our method. Resampling simulations indicated that to achieve adequate sampling precision for Oligochaeta, at least 36 sample sites per stratum would be required, whereas a sampling precision of 0.2 would not be attained with any sample size for Hexagenia in channel border areas, or Chironomidae and Musculium in both strata given the variance structure of the original samples. Community-wide diversity indices (Brillouin and 1-Simpson's) increased as sample area per site increased. The backwater area had higher diversity than the channel border area. The number of sampling sites

  10. Approximate sample sizes required to estimate length distributions

    USGS Publications Warehouse

    Miranda, L.E.

    2007-01-01

    The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. © Copyright by the American Fisheries Society 2007.
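
    A bare-bones version of the bootstrap procedure used above, assuming a vector of reference fish lengths is available: for a candidate sample size, resample repeatedly and ask how often the statistic of interest (here mean length) falls within 10% of the reference value; the smallest n reaching 80% of replicates is the required sample size. The gamma-distributed reference lengths are placeholders.

        import numpy as np

        rng = np.random.default_rng(3)
        reference_lengths = rng.gamma(shape=9.0, scale=4.0, size=5000)   # hypothetical reference population (cm)
        target = reference_lengths.mean()

        def proportion_within(n, reps=2000, tol=0.10):
            means = np.array([rng.choice(reference_lengths, size=n, replace=True).mean()
                              for _ in range(reps)])
            return np.mean(np.abs(means - target) <= tol * target)

        for n in (10, 25, 50, 100):
            p = proportion_within(n)
            print(f"n = {n:4d}: within 10% of the true mean in {100 * p:.0f}% of bootstrap samples")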

  11. An Investigation of Sample Size Splitting on ATFIND and DIMTEST

    ERIC Educational Resources Information Center

    Socha, Alan; DeMars, Christine E.

    2013-01-01

    Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…

  12. The Fisher-Yates Exact Test and Unequal Sample Sizes

    ERIC Educational Resources Information Center

    Johnson, Edgar M.

    1972-01-01

    A computational short cut suggested by Feldman and Klinger for the one-sided Fisher-Yates exact test is clarified and is extended to the calculation of probability values for certain two-sided tests when sample sizes are unequal. (Author)

  13. Sample Size Bias in Judgments of Perceptual Averages

    ERIC Educational Resources Information Center

    Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.

    2014-01-01

    Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…

  14. Sample size calculation for meta-epidemiological studies.

    PubMed

    Giraudeau, Bruno; Higgins, Julian P T; Tavernier, Elsa; Trinquart, Ludovic

    2016-01-30

    Meta-epidemiological studies are used to compare treatment effect estimates between randomized clinical trials with and without a characteristic of interest. To our knowledge, there is presently nothing to help researchers to a priori specify the required number of meta-analyses to be included in a meta-epidemiological study. We derived a theoretical power function and sample size formula in the framework of a hierarchical model that allows for variation in the impact of the characteristic between trials within a meta-analysis and between meta-analyses. A simulation study revealed that the theoretical function overestimated power (because of the assumption of equal weights for each trial within and between meta-analyses). We also propose a simulation approach that allows for relaxing the constraints used in the theoretical approach and is more accurate. We illustrate that the two variables that mostly influence power are the number of trials per meta-analysis and the proportion of trials with the characteristic of interest. We derived a closed-form power function and sample size formula for estimating the impact of trial characteristics in meta-epidemiological studies. Our analytical results can be used as a 'rule of thumb' for sample size calculation for a meta-epidemiologic study. A more accurate sample size can be derived with a simulation study.

  15. Sample Size Tables, "t" Test, and a Prevalent Psychometric Distribution.

    ERIC Educational Resources Information Center

    Sawilowsky, Shlomo S.; Hillman, Stephen B.

    Psychology studies often have low statistical power. Sample size tables, as given by J. Cohen (1988), may be used to increase power, but they are based on Monte Carlo studies of relatively "tame" mathematical distributions, as compared to psychology data sets. In this study, Monte Carlo methods were used to investigate Type I and Type II error…

  16. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    PubMed Central

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-01-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from −4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233

  17. Evaluation of morphological representative sample sizes for nanolayered polymer blends.

    PubMed

    Bironeau, A; Dirrenberger, J; Sollogoub, C; Miquelard-Garnier, G; Roland, S

    2016-10-01

    The size of representative microstructural samples obtained from atomic force microscopy is addressed in this paper. The case of an archetypal one-dimensional nanolayered polymer blend is considered. Image analysis is performed on micrographs obtained through atomic force microscopy, yielding statistical data concerning morphological properties of the material. The variability in terms of microstructural morphology is due to the thermomechanical processing route. The statistical data is used in order to estimate sample size representativity, based on an asymptotic relationship relating the inherent point variance of the indicator function of one material phase to the statistical, size-dependent, ensemble variance of the same function. From the study of nanolayered material systems, the statistical approach was found to be an effective means of discriminating and characterizing multiple scales of heterogeneity.
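
    The asymptotic relationship mentioned above can be sketched as follows: the ensemble variance of a phase's volume fraction measured on windows of size S decays roughly as D²(S) ≈ σ²·A/S, where σ² is the point variance of the phase indicator and A an integral range; fitting that decay to window measurements gives the window (sample) size needed for a target relative error. A one-dimensional toy version on a synthetic layered structure follows; all parameters, the synthetic microstructure and the target error are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(4)
        # Synthetic 1-D layered medium: alternating layers of random thickness, phase A vs phase B
        thicknesses = rng.integers(5, 50, size=4000)
        phases = np.tile([1, 0], len(thicknesses) // 2)
        indicator = np.repeat(phases, thicknesses)       # 1 where phase A is present
        phi = indicator.mean()                           # overall volume fraction of phase A

        # Ensemble variance of the volume fraction measured on windows of size S
        sizes = np.array([200, 400, 800, 1600, 3200, 6400])
        variances = []
        for S in sizes:
            n_win = len(indicator) // S
            windows = indicator[:n_win * S].reshape(n_win, S).mean(axis=1)
            variances.append(windows.var(ddof=1))
        variances = np.array(variances)

        # Least-squares fit of the asymptotic form D^2(S) = C / S
        C = np.sum(variances / sizes) / np.sum(1.0 / sizes**2)
        eps = 0.05                                       # target relative standard error on phi
        S_required = C / (eps * phi) ** 2
        print(f"phi = {phi:.3f}, required window size ~ {S_required:.0f} pixels for {eps:.0%} relative error")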

  18. Practical Consideration of Genotype Imputation: Sample Size, Window Size, Reference Choice, and Untyped Rate

    PubMed Central

    Zhang, Boshao; Zhi, Degui; Zhang, Kui; Gao, Guimin; Limdi, Nita N.; Liu, Nianjun

    2011-01-01

    Imputation offers a promising way to infer the missing and/or untyped genotypes in genetic studies. In practice, however, many factors may affect the quality of imputation. In this study, we evaluated the influence of untyped rate, sizes of the study sample and the reference sample, window size, and reference choice (for admixed population), as the factors affecting the quality of imputation. The results show that in order to obtain good imputation quality, it is necessary to have an untyped rate less than 50%, a reference sample size greater than 50, and a window size greater than 500 SNPs (roughly 1 Mb in base pairs). Compared with the whole-region imputation, piecewise imputation with large-enough window sizes provides improved efficacy. For an admixed study sample, if only an external reference panel is used, it should include samples from the ancestral populations that represent the admixed population under investigation. Internal references are strongly recommended. When internal references are limited, however, augmentation by external references should be used carefully. More specifically, augmentation with samples from the major source populations of the admixture can lower the quality of imputation; augmentation with seemingly genetically unrelated cohorts may improve the quality of imputation. PMID:22308193

  19. Sample size consideration for immunoassay screening cut-point determination.

    PubMed

    Zhang, Jianchun; Zhang, Lanju; Yang, Harry

    2014-01-01

    Past decades have seen a rapid growth of biopharmaceutical products on the market. The administration of such large molecules can generate antidrug antibodies that can induce unwanted immune reactions in the recipients. Assessment of immunogenicity is required by regulatory agencies in clinical and nonclinical development, and this demands a well-validated assay. One of the important performance characteristics during assay validation is the cut point, which serves as a threshold between positive and negative samples. To precisely determine the cut point, a sufficiently large data set is often needed. However, there is no guideline other than some rule-of-thumb recommendations for sample size requirement in immunoassays. In this article, we propose a systematic approach to sample size determination for immunoassays and provide tables that facilitate its applications by scientists.

  1. Detecting Neuroimaging Biomarkers for Psychiatric Disorders: Sample Size Matters.

    PubMed

    Schnack, Hugo G; Kahn, René S

    2016-01-01

    In a recent review, it was suggested that much larger cohorts are needed to prove the diagnostic value of neuroimaging biomarkers in psychiatry. While within a sample, an increase of diagnostic accuracy of schizophrenia (SZ) with number of subjects (N) has been shown, the relationship between N and accuracy is completely different between studies. Using data from a recent meta-analysis of machine learning (ML) in imaging SZ, we found that while low-N studies can reach 90% and higher accuracy, above N/2 = 50 the maximum accuracy achieved steadily drops to below 70% for N/2 > 150. We investigate the role N plays in the wide variability in accuracy results in SZ studies (63-97%). We hypothesize that the underlying cause of the decrease in accuracy with increasing N is sample heterogeneity. While smaller studies more easily include a homogeneous group of subjects (strict inclusion criteria are easily met; subjects live close to study site), larger studies inevitably need to relax the criteria/recruit from large geographic areas. A SZ prediction model based on a heterogeneous group of patients with presumably a heterogeneous pattern of structural or functional brain changes will not be able to capture the whole variety of changes, thus being limited to patterns shared by most patients. In addition to heterogeneity (sample size), we investigate other factors influencing accuracy and introduce a ML effect size. We derive a simple model of how the different factors, such as sample heterogeneity and study setup determine this ML effect size, and explain the variation in prediction accuracies found from the literature, both in cross-validation and independent sample testing. From this, we argue that smaller-N studies may reach high prediction accuracy at the cost of lower generalizability to other samples. Higher-N studies, on the other hand, will have more generalization power, but at the cost of lower accuracy. In conclusion, when comparing results from different

  2. Rock sampling. [method for controlling particle size distribution

    NASA Technical Reports Server (NTRS)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  3. Air sampling filtration media: Collection efficiency for respirable size-selective sampling

    PubMed Central

    Soo, Jhy-Charm; Monaghan, Keenan; Lee, Taekhee; Kashon, Mike; Harper, Martin

    2016-01-01

    The collection efficiencies of commonly used membrane air sampling filters in the ultrafine particle size range were investigated. Mixed cellulose ester (MCE; 0.45, 0.8, 1.2, and 5 μm pore sizes), polycarbonate (0.4, 0.8, 2, and 5 μm pore sizes), polytetrafluoroethylene (PTFE; 0.45, 1, 2, and 5 μm pore sizes), polyvinyl chloride (PVC; 0.8 and 5 μm pore sizes), and silver membrane (0.45, 0.8, 1.2, and 5 μm pore sizes) filters were exposed to polydisperse sodium chloride (NaCl) particles in the size range of 10–400 nm. Test aerosols were nebulized and introduced into a calm air chamber through a diffusion dryer and aerosol neutralizer. The testing filters (37 mm diameter) were mounted in a conductive polypropylene filter-holder (cassette) within a metal testing tube. The experiments were conducted at flow rates between 1.7 and 11.2 l min−1. The particle size distributions of NaCl challenge aerosol were measured upstream and downstream of the test filters by a scanning mobility particle sizer (SMPS). Three different filters of each type with at least three repetitions for each pore size were tested. In general, the collection efficiency varied with airflow, pore size, and sampling duration. In addition, both collection efficiency and pressure drop increased with decreased pore size and increased sampling flow rate, but they differed among filter types and manufacturer. The present study confirmed that the MCE, PTFE, and PVC filters have a relatively high collection efficiency for challenge particles much smaller than their nominal pore size and are considerably more efficient than polycarbonate and silver membrane filters, especially at larger nominal pore sizes. PMID:26834310

  4. An integrated approach for multi-level sample size determination

    SciTech Connect

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-12-31

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by "attributes" involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications are clearly described, and the process is put in a form that allows systematic generalization.

  5. GLIMMPSE Lite: Calculating Power and Sample Size on Smartphone Devices

    PubMed Central

    Munjal, Aarti; Sakhadeo, Uttara R.; Muller, Keith E.; Glueck, Deborah H.; Kreidler, Sarah M.

    2014-01-01

    Researchers seeking to develop complex statistical applications for mobile devices face a common set of difficult implementation issues. In this work, we discuss general solutions to the design challenges. We demonstrate the utility of the solutions for a free mobile application designed to provide power and sample size calculations for univariate, one-way analysis of variance (ANOVA), GLIMMPSE Lite. Our design decisions provide a guide for other scientists seeking to produce statistical software for mobile platforms. PMID:25541688
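
    The same univariate one-way ANOVA power calculation the app provides can be sketched on the desktop with statsmodels; the effect size f, group count, alpha and power below are arbitrary examples rather than values from the paper.

        from statsmodels.stats.power import FTestAnovaPower
        from math import ceil

        # Total N needed for a one-way ANOVA with 4 groups, Cohen's f = 0.25, alpha = .05, power = .80
        n_total = FTestAnovaPower().solve_power(effect_size=0.25, k_groups=4, alpha=0.05, power=0.80)
        print("total sample size:", ceil(n_total))   # roughly 180 participants in total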

  6. Tooth Wear Prevalence and Sample Size Determination : A Pilot Study

    PubMed Central

    Abd. Karim, Nama Bibi Saerah; Ismail, Noorliza Mastura; Naing, Lin; Ismail, Abdul Rashid

    2008-01-01

    Tooth wear is the non-carious loss of tooth tissue, which results from three processes, namely attrition, erosion and abrasion. These can occur in isolation or simultaneously. Very mild tooth wear is a physiological effect of aging. This study aims to estimate the prevalence of tooth wear among 16-year-old Malay school children and determine a feasible sample size for further study. Fifty-five subjects were examined clinically, followed by the completion of self-administered questionnaires. Questionnaires consisted of socio-demographic and associated variables for tooth wear obtained from the literature. The Smith and Knight tooth wear index was used to chart tooth wear. Other oral findings were recorded using the WHO criteria. A software programme was used to determine pathological tooth wear. An approximately equal ratio of males to females was involved. It was found that 18.2% of subjects had no tooth wear, 63.6% had very mild tooth wear, 10.9% mild tooth wear, 5.5% moderate tooth wear and 1.8% severe tooth wear. In conclusion, 18.2% of subjects were deemed to have pathological tooth wear (mild, moderate & severe). Exploration with all associated variables gave a sample size ranging from 560 to 1715. The final sample size for further study greatly depends on available time and resources. PMID:22589636
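
    Sample sizes for estimating a prevalence, of the kind explored above, follow the usual normal-approximation formula n = z²·p·(1 − p)/d². A quick sketch; the 18.2% pilot prevalence is taken from the abstract, while the precision d and confidence level are illustrative choices.

        from math import ceil
        from scipy.stats import norm

        def prevalence_sample_size(p, d, confidence=0.95):
            # n = z^2 * p * (1 - p) / d^2, before any finite-population or design-effect correction
            z = norm.ppf(1 - (1 - confidence) / 2)
            return ceil(z**2 * p * (1 - p) / d**2)

        print(prevalence_sample_size(p=0.182, d=0.05))   # about 229 for +/-5% absolute precision
        print(prevalence_sample_size(p=0.182, d=0.03))   # about 636 for +/-3% absolute precision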

  7. MINSIZE: A Computer Program for Obtaining Minimum Sample Size as an Indicator of Effect Size.

    ERIC Educational Resources Information Center

    Morse, David T.

    1998-01-01

    Describes MINSIZE, an MS-DOS computer program that permits the user to determine the minimum sample size needed for the results of a given analysis to be statistically significant. Program applications for statistical significance tests are presented and illustrated. (SLD)

  8. [Unconditioned logistic regression and sample size: a bibliographic review].

    PubMed

    Ortega Calvo, Manuel; Cayuela Domínguez, Aurelio

    2002-01-01

    Unconditioned logistic regression is a highly useful risk prediction method in epidemiology. This article reviews the different solutions provided by different authors concerning the interface between the calculation of the sample size and the use of logistic regression. Based on the information initially provided, a review is made of customized regression and the predictive constriction phenomenon, the design of an ordinal exposure with a binary outcome, the events-per-variable concept, indicator variables, the classic Freeman equation, etc. Some skeptical ideas regarding this subject are also included. PMID:12025266

  9. Efficient Coalescent Simulation and Genealogical Analysis for Large Sample Sizes

    PubMed Central

    Kelleher, Jerome; Etheridge, Alison M; McVean, Gilean

    2016-01-01

    A central challenge in the analysis of genetic variation is to provide realistic genome simulation across millions of samples. Present day coalescent simulations do not scale well, or use approximations that fail to capture important long-range linkage properties. Analysing the results of simulations also presents a substantial challenge, as current methods to store genealogies consume a great deal of space, are slow to parse and do not take advantage of shared structure in correlated trees. We solve these problems by introducing sparse trees and coalescence records as the key units of genealogical analysis. Using these tools, exact simulation of the coalescent with recombination for chromosome-sized regions over hundreds of thousands of samples is possible, and substantially faster than present-day approximate methods. We can also analyse the results orders of magnitude more quickly than with existing methods. PMID:27145223

  10. Sample size calculation for the one-sample log-rank test.

    PubMed

    Schmidt, René; Kwiecien, Robert; Faldum, Andreas; Berthold, Frank; Hero, Barbara; Ligges, Sandra

    2015-03-15

    An improved method of sample size calculation for the one-sample log-rank test is provided. The one-sample log-rank test may be the method of choice if the survival curve of a single treatment group is to be compared with that of a historic control. Such settings arise, for example, in clinical phase-II trials if the response to a new treatment is measured by a survival endpoint. Present sample size formulas for the one-sample log-rank test are based on the number of events to be observed; that is, in order to achieve approximately the desired power for the allocated significance level and effect, the trial is stopped as soon as a certain critical number of events is reached. We propose a new stopping criterion to be followed. Both approaches are shown to be asymptotically equivalent. For small sample sizes, though, a simulation study indicates that the new criterion might be preferred when planning a corresponding trial. In our simulations, the trial is usually underpowered and the aspired significance level is not exploited if the traditional stopping criterion based on the number of events is used, whereas a trial based on the new stopping criterion maintains power with the type-I error rate still controlled.

  11. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    PubMed Central

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
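
    For a scalar parameter, the ESS referred to above is typically computed from the autocorrelation of the MCMC trace as N / (1 + 2·Σ ρk); the topology ESS methods in the paper generalize this idea to tree space. A minimal sketch of the scalar version on a simulated autocorrelated chain (the AR(1) trace stands in for a real MCMC trace).

        import numpy as np

        def effective_sample_size(trace):
            # ESS = N / (1 + 2 * sum of positive-lag autocorrelations),
            # truncated at the first non-positive autocorrelation
            x = np.asarray(trace, dtype=float) - np.mean(trace)
            n = len(x)
            acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
            s = 0.0
            for rho in acf[1:]:
                if rho <= 0:
                    break
                s += rho
            return n / (1 + 2 * s)

        rng = np.random.default_rng(5)
        trace = np.zeros(10000)
        for t in range(1, len(trace)):              # autocorrelated AR(1) chain, phi = 0.9
            trace[t] = 0.9 * trace[t - 1] + rng.normal()
        print(round(effective_sample_size(trace)))  # far fewer than 10,000 effectively independent samples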

  12. Quantum state discrimination bounds for finite sample size

    SciTech Connect

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Verstraete, Frank

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.

  13. MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes

    NASA Technical Reports Server (NTRS)

    Allen, Carlton C.

    2011-01-01

    The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A). Subaqueous or hydrothermal sediments (1B). Hydrothermally altered rocks or low temperature fluid-altered rocks (equal priority) (2). Unaltered igneous rocks (3). Regolith, including airfall dust (4). Present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by

  14. Statistical identifiability and sample size calculations for serial seroepidemiology

    PubMed Central

    Vinh, Dao Nguyen; Boni, Maciej F.

    2015-01-01

    Inference on disease dynamics is typically performed using case reporting time series of symptomatic disease. The inferred dynamics will vary depending on the reporting patterns and surveillance system for the disease in question, and the inference will miss mild or underreported epidemics. To eliminate the variation introduced by differing reporting patterns and to capture asymptomatic or subclinical infection, inferential methods can be applied to serological data sets instead of case reporting data. To reconstruct complete disease dynamics, one would need to collect a serological time series. In the statistical analysis presented here, we consider a particular kind of serological time series with repeated, periodic collections of population-representative serum. We refer to this study design as a serial seroepidemiology (SSE) design, and we base the analysis on our epidemiological knowledge of influenza. We consider a study duration of three to four years, during which a single antigenic type of influenza would be circulating, and we evaluate our ability to reconstruct disease dynamics based on serological data alone. We show that the processes of reinfection, antibody generation, and antibody waning confound each other and are not always statistically identifiable, especially when dynamics resemble a non-oscillating endemic equilibrium behavior. We introduce some constraints to partially resolve this confounding, and we show that transmission rates and basic reproduction numbers can be accurately estimated in SSE study designs. Seasonal forcing is more difficult to identify as serology-based studies only detect oscillations in antibody titers of recovered individuals, and these oscillations are typically weaker than those observed for infected individuals. To accurately estimate the magnitude and timing of seasonal forcing, serum samples should be collected every two months and 200 or more samples should be included in each collection; this sample size estimate

  15. How to optimise the yield of forensic and clinical post-mortem microbiology with an adequate sampling: a proposal for standardisation.

    PubMed

    Fernández-Rodríguez, A; Cohen, M C; Lucena, J; Van de Voorde, W; Angelini, A; Ziyade, N; Saegeman, V

    2015-05-01

    Post-mortem microbiology (PMM) is an important tool in forensic pathology, helping to determine the cause and manner of death, especially in difficult scenarios such as sudden unexpected death (SD). Currently, there is a lack of standardization of PMM sampling throughout Europe. We present recommendations elaborated by a panel of European experts aimed to standardize microbiological sampling in the most frequent forensic and clinical post-mortem situations. A network of forensic microbiologists, pathologists and physicians from Spain, England, Belgium, Italy and Turkey shaped a flexible protocol providing minimal requirements for PMM sampling at four practical scenarios: SD, bioterrorism, tissue and cell transplantation (TCT) and paleomicrobiology. Biosafety recommendations were also included. SD was categorized into four subgroups according to the age of the deceased and circumstances at autopsy: (1) included SD in infancy and childhood (0-16 years); (2) corresponded to SD in the young (17-35 years); (3) comprised SD at any age with clinical symptoms; and (4) included traumatic/iatrogenic SD. For each subgroup, a minimum set of samples and general recommendations for microbiological analyses were established. Sampling recommendations for main bioterrorism scenarios were provided. In the TCT setting, the Belgian sampling protocol was presented as an example. Finally, regarding paleomicrobiology, the sampling selection for different types of human remains was reviewed. This proposal for standardization in the sampling constitutes the first step towards a consensus in PMM procedures. In addition, the protocol flexibility to adapt the sampling to the clinical scenario and specific forensic findings adds a cost-benefit value.

  16. Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.; Ralph, C. John; Sauer, John R.; Droege, Sam

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of number of points visited, number of visits to each point, and interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For α = 0.05 and α = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas ±25 percent of the mean could be achieved with five counts per factor level. Sample size sufficient to detect actual differences of Wood Thrush (Hylocichla mustelina) was >200, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among number of points visited and among number of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits
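
    Minimum sample sizes of the kind reported above can be approximated from the ANOVA mean square error and the smallest difference one wants to detect, using n ≈ 2·(z₁₋α/₂ + z₁₋β)²·MSE/Δ² per factor level. A sketch using the MSE values quoted in the abstract together with assumed detectable differences and power; the results are not intended to reproduce the study's own estimates.

        from math import ceil
        from scipy.stats import norm

        def n_per_level(mse, delta, alpha=0.05, power=0.80):
            # normal-approximation sample size for detecting a difference `delta`
            # between two factor levels when the within-level variance is `mse`
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return ceil(2 * (z**2) * mse / delta**2)

        print(n_per_level(mse=9.28, delta=2.0))   # detect a difference of 2 individuals when MSE = 9.28
        print(n_per_level(mse=3.79, delta=1.0))   # detect a difference of 1 species when MSE = 3.79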

  17. Using Ancillary Information to Reduce Sample Size in Discovery Sampling and the Effects of Measurement Error

    SciTech Connect

    Axelrod, M

    2005-08-18

    Discovery sampling is a tool used in discovery auditing. The purpose of such an audit is to provide evidence that some (usually large) inventory of items complies with a defined set of criteria by inspecting (or measuring) a representative sample drawn from the inventory. If any of the items in the sample fail compliance (defective items), then the audit has discovered an impropriety, which often triggers some action. However, finding defective items in a sample is an unusual event--auditors expect the inventory to be in compliance because they come to the audit with an 'innocent until proven guilty' attitude. As part of their work product, the auditors must provide a confidence statement about the compliance level of the inventory. Clearly the more items they inspect, the greater their confidence, but more inspection means more cost. Audit costs can be purely economic, but in some cases, the cost is political because more inspection means more intrusion, which communicates an attitude of distrust. Thus, auditors have every incentive to minimize the number of items in the sample. Indeed, in some cases the sample size can be specifically limited by a prior agreement or an ongoing policy. Statements of confidence about the results of a discovery sample generally use the method of confidence intervals. After finding no defectives in the sample, the auditors provide a range of values that bracket the number of defective items that could credibly be in the inventory. They also state a level of confidence for the interval, usually 90% or 95%. For example, the auditors might say: "We believe that this inventory of 1,000 items contains no more than 10 defectives with a confidence of 95%". Frequently, clients ask their auditors questions such as: How many items do you need to measure to be 95% confident that there are no more than 10 defectives in the entire inventory? Sometimes when the auditors answer with big numbers like "300", their clients balk. They balk because a
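
    A minimal sketch (not the report's exact procedure) of the hypergeometric calculation behind statements like "no more than 10 defectives with 95% confidence": find the smallest sample size n such that, if the inventory actually contained 11 or more defectives, drawing zero defectives in the sample would occur with probability at most 5%. The function name and loop are illustrative.

```python
from scipy.stats import hypergeom

def discovery_sample_size(inventory: int, max_defectives: int, confidence: float = 0.95) -> int:
    """Smallest n so that a clean sample rules out > max_defectives at the given confidence."""
    worst_case = max_defectives + 1              # just over the tolerated number of defectives
    for n in range(1, inventory + 1):
        p_zero = hypergeom.pmf(0, inventory, worst_case, n)   # P(no defectives in sample)
        if p_zero <= 1 - confidence:
            return n
    return inventory

print(discovery_sample_size(1000, 10))           # sample size for the 1,000-item example above
```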

  18. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    PubMed

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the

  19. Evaluation of Pump Pulsation in Respirable Size-Selective Sampling: Part II. Changes in Sampling Efficiency

    PubMed Central

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M.; Harper, Martin

    2015-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the

  20. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Sample items shall be selected in such a way as to comprise a simple random sample of each relevant...% Expected Error Rate—0% Maximum Tolerable Error Rate—10% (3) Option 3. The auditor may use some other...

  1. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Sample items shall be selected in such a way as to comprise a simple random sample of each relevant...% Expected Error Rate—0% Maximum Tolerable Error Rate—10% (3) Option 3. The auditor may use some other...

  2. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Sample items shall be selected in such a way as to comprise a simple random sample of each relevant...% Expected Error Rate—0% Maximum Tolerable Error Rate—10% (3) Option 3. The auditor may use some other...

  3. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    SciTech Connect

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry-standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a

  4. Central limit theorem for variable size simple random sampling from a finite population

    SciTech Connect

    Wright, T.

    1986-02-01

    This paper introduces a sampling plan for finite populations herein called "variable size simple random sampling" and compares properties of estimators based on it with results from the usual fixed size simple random sampling without replacement. Necessary and sufficient conditions (in the spirit of Hajek) for the limiting distribution of the sample total (or sample mean) to be normal are given. 19 refs.

  5. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    ERIC Educational Resources Information Center

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  6. A contemporary decennial global sample of changing agricultural field sizes

    NASA Astrophysics Data System (ADS)

    White, E.; Roy, D. P.

    2011-12-01

    In the last several hundred years agriculture has caused significant human-induced Land Cover Land Use Change (LCLUC) with dramatic cropland expansion and a marked increase in agricultural productivity. The size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLUC. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, diffusion of disease pathogens and pests, and loss or degradation in buffers to nutrient, herbicide and pesticide flows. In this study, globally distributed locations with significant contemporary field size change were selected, guided by a global map of agricultural yield and a literature review, to be representative of different driving forces of field size change (associated with technological innovation, socio-economic conditions, government policy, historic patterns of land cover land use, and environmental setting). Seasonal Landsat data acquired on a decadal basis (for 1980, 1990, 2000 and 2010) were used to extract field boundaries; the temporal changes in field size were then quantified and their causes discussed.

  7. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…

  8. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  9. 7 CFR 201.43 - Size of sample.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT... seeds of similar or larger size. (e) Two quarts (2.2 liters) of screenings. (f) Vegetable seed...

  10. 7 CFR 201.43 - Size of sample.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... examination: (a) Two ounces (57 grams) of grass seed not otherwise mentioned, white or alsike clover, or seeds not larger than these. (b) Five ounces (142 grams) of red or crimson clover, alfalfa, lespedeza, ryegrass, bromegrass, millet, flax, rape, or seeds of similar size. (c) One pound (454 grams) of...

  11. Finding Alternatives to the Dogma of Power Based Sample Size Calculation: Is a Fixed Sample Size Prospective Meta-Experiment a Potential Alternative?

    PubMed Central

    Tavernier, Elsa; Trinquart, Ludovic; Giraudeau, Bruno

    2016-01-01

    Sample sizes for randomized controlled trials are typically based on power calculations. They require us to specify values for parameters such as the treatment effect, which is often difficult because we lack sufficient prior information. The objective of this paper is to provide an alternative design which circumvents the need for sample size calculation. In a simulation study, we compared a meta-experiment approach to the classical approach to assess treatment efficacy. The meta-experiment approach involves use of meta-analyzed results from 3 randomized trials of fixed sample size, 100 subjects. The classical approach involves a single randomized trial with the sample size calculated on the basis of an a priori-formulated hypothesis. For the sample size calculation in the classical approach, we used observed articles to characterize errors made on the formulated hypothesis. A prospective meta-analysis of data from trials of fixed sample size provided the same precision, power and type I error rate, on average, as the classical approach. The meta-experiment approach may provide an alternative design which does not require a sample size calculation and addresses the essential need for study replication; results may have greater external validity. PMID:27362939
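
    A rough simulation sketch of the meta-experiment idea: several small fixed-size trials are pooled with a fixed-effect (inverse-variance) meta-analysis and the rejection rate is recorded. The effect size, per-arm size, number of trials, and number of simulation replicates below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
delta, sigma, n_arm, n_trials, reps = 0.3, 1.0, 50, 3, 5000   # assumed scenario

rejections = 0
for _ in range(reps):
    effects, variances = [], []
    for _ in range(n_trials):
        treat = rng.normal(delta, sigma, n_arm)
        ctrl = rng.normal(0.0, sigma, n_arm)
        effects.append(treat.mean() - ctrl.mean())
        variances.append(treat.var(ddof=1) / n_arm + ctrl.var(ddof=1) / n_arm)
    w = 1 / np.asarray(variances)                      # inverse-variance weights
    pooled = np.sum(w * np.asarray(effects)) / np.sum(w)
    z = pooled / np.sqrt(1 / np.sum(w))
    rejections += abs(z) > norm.ppf(0.975)

print("empirical power of the 3-trial meta-experiment:", rejections / reps)
```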

  12. Basic concepts for sample size calculation: Critical step for any clinical trials!

    PubMed Central

    Gupta, KK; Attri, JP; Singh, A; Kaur, H; Kaur, G

    2016-01-01

    The quality of clinical trials has improved steadily over the last two decades, but certain areas in trial methodology still require special attention, such as sample size calculation. The sample size is one of the basic steps in planning any clinical trial, and negligence in its calculation may lead to rejection of true findings while false results gain approval. Although statisticians play a major role in sample size estimation, basic knowledge of sample size calculation is sparse among most anesthesiologists involved in research, including trainee doctors. In this review, we will discuss how important sample size calculation is for research studies and the effects of underestimation or overestimation of sample size on a project's results. We have highlighted the basic concepts regarding various parameters needed to calculate the sample size along with examples. PMID:27375390
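
    A minimal example of the textbook sample-size formula for comparing two means, n per group = 2·σ²·(z₁₋α/₂ + z₁₋β)²/δ², which is the kind of calculation such reviews walk through. The standard deviation and detectable difference below are illustrative values, not taken from the article.

```python
import math
from scipy.stats import norm

def n_per_group(sigma: float, delta: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-sample comparison of means (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (sigma * z / delta) ** 2)

# e.g. detecting a 5-unit difference when the SD is 10:
print(n_per_group(sigma=10, delta=5))   # 63 per group at 80% power, two-sided alpha = 0.05
```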

  13. 7 CFR 201.43 - Size of sample.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... units. Coated seed for germination test only shall consist of at least 1,000 seed units. ..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT... of samples of agricultural seed, vegetable seed and screenings to be submitted for analysis, test,...

  14. 7 CFR 201.43 - Size of sample.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... units. Coated seed for germination test only shall consist of at least 1,000 seed units. ..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT... of samples of agricultural seed, vegetable seed and screenings to be submitted for analysis, test,...

  15. 7 CFR 201.43 - Size of sample.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... units. Coated seed for germination test only shall consist of at least 1,000 seed units. ..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT... of samples of agricultural seed, vegetable seed and screenings to be submitted for analysis, test,...

  16. Geoscience Education Research Methods: Thinking About Sample Size

    NASA Astrophysics Data System (ADS)

    Slater, S. J.; Slater, T. F.; CenterAstronomy; Physics Education Research

    2011-12-01

    Geoscience education research is at a critical point in which conditions are sufficient to propel our field forward toward meaningful improvements in geosciences education practices. Our field has now reached a point where the outcomes of our research are deemed important to end users and funding agencies, and where we now have a large number of scientists who are either formally trained in geosciences education research, or who have dedicated themselves to excellence in this domain. At this point we must collectively work through our epistemology, our rules of what methodologies will be considered sufficiently rigorous, and what data and analysis techniques will be acceptable for constructing evidence. In particular, we have to work out our answer to that most difficult of research questions: "How big should my 'N' be?" This paper presents a very brief answer to that question, addressing both quantitative and qualitative methodologies. Research question/methodology alignment, effect size and statistical power will be discussed, in addition to a defense of the notion that bigger is not always better.

  17. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  18. Sample Size in Differential Item Functioning: An Application of Hierarchical Linear Modeling

    ERIC Educational Resources Information Center

    Acar, Tulin

    2011-01-01

    The purpose of this study is to examine the number of DIF items detected by HGLM at different sample sizes. Eight data files of different sizes were composed. The population of the study was 798,307 students who had taken the 2006 OKS Examination, of whom 10,727 were chosen by random sampling as the study sample. Turkish,…

  19. Sample size estimation for the van Elteren test--a stratified Wilcoxon-Mann-Whitney test.

    PubMed

    Zhao, Yan D

    2006-08-15

    The van Elteren test is a type of stratified Wilcoxon-Mann-Whitney test for comparing two treatments accounting for strata. In this paper, we study sample size estimation methods for the asymptotic version of the van Elteren test, assuming that the stratum fractions (ratios of each stratum size to the total sample size) and the treatment fractions (ratios of each treatment size to the stratum size) are known in the study design. In particular, we develop three large-sample sample size estimation methods and present a real data example to illustrate the necessary information in the study design phase in order to apply the methods. Simulation studies are conducted to compare the performance of the methods and recommendations are made for method choice. Finally, sample size estimation for the van Elteren test when the stratum fractions are unknown is also discussed.
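
    A hedged sketch of a van Elteren-style stratified rank test: one common form of the statistic divides each stratum's treatment rank sum by (stratum size + 1) and adds the weighted sums. To avoid restating the asymptotic variance used in the paper, significance is assessed here by within-stratum permutation; the toy data and function names are invented.

```python
import numpy as np
from scipy.stats import rankdata

def van_elteren_stat(strata):
    """strata: list of (treatment_values, control_values) arrays, one pair per stratum."""
    total = 0.0
    for treat, ctrl in strata:
        pooled = np.concatenate([treat, ctrl])
        ranks = rankdata(pooled)
        total += ranks[: len(treat)].sum() / (len(pooled) + 1)   # weighted rank sum
    return total

def permutation_pvalue(strata, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    observed = van_elteren_stat(strata)
    null = []
    for _ in range(n_perm):
        shuffled = []
        for treat, ctrl in strata:
            pooled = rng.permutation(np.concatenate([treat, ctrl]))   # relabel within stratum
            shuffled.append((pooled[: len(treat)], pooled[len(treat):]))
        null.append(van_elteren_stat(shuffled))
    null = np.asarray(null)
    return np.mean(np.abs(null - null.mean()) >= abs(observed - null.mean()))

# toy data: two strata of unequal size
strata = [(np.array([5.1, 6.2, 7.3]), np.array([4.0, 4.5, 5.0, 5.5])),
          (np.array([8.0, 9.1]), np.array([7.0, 7.5, 8.5]))]
print(permutation_pvalue(strata))
```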

  20. Sample size reassessment for a two-stage design controlling the false discovery rate.

    PubMed

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.

  2. Model Choice and Sample Size in Item Response Theory Analysis of Aphasia Tests

    ERIC Educational Resources Information Center

    Hula, William D.; Fergadiotis, Gerasimos; Martin, Nadine

    2012-01-01

    Purpose: The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Method: Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from…

  3. 40 CFR 761.243 - Standard wipe sample method and size.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.243 Standard wipe sample method and size. (a) Collect a surface sample from a natural gas... June 23, 1987 and revised on April 18, 1991. This document is available on EPA's Web site at...

  4. 40 CFR 761.243 - Standard wipe sample method and size.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.243 Standard wipe sample method and size. (a) Collect a surface sample from a natural gas... June 23, 1987 and revised on April 18, 1991. This document is available on EPA's Web site at...

  5. 40 CFR 761.243 - Standard wipe sample method and size.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.243 Standard wipe sample method and size. (a) Collect a surface sample from a natural gas... June 23, 1987 and revised on April 18, 1991. This document is available on EPA's Web site at...

  6. 40 CFR 761.243 - Standard wipe sample method and size.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.243 Standard wipe sample method and size. (a) Collect a surface sample from a natural gas... June 23, 1987 and revised on April 18, 1991. This document is available on EPA's Web site at...

  7. 40 CFR 761.243 - Standard wipe sample method and size.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.243 Standard wipe sample method and size. (a) Collect a surface sample from a natural gas... June 23, 1987 and revised on April 18, 1991. This document is available on EPA's Web site at...

  8. Evaluation of design flood estimates with respect to sample size

    NASA Astrophysics Data System (ADS)

    Kobierska, Florian; Engeland, Kolbjorn

    2016-04-01

    Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data are available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2-parameter distribution is recommended, and for more than 50 years of data, a 3-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe the degree to which design flood predictions can be trusted.
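
    A minimal sketch of a local flood-frequency fit of the kind such guidelines discuss: fit a 3-parameter GEV to annual maxima by maximum likelihood and read off a design flood as a return level. The data here are synthetic and the return period is chosen arbitrarily; this is not the study's test-bench procedure.

```python
from scipy.stats import genextreme

# synthetic annual maximum discharges standing in for a 60-year local record
annual_maxima = genextreme.rvs(-0.1, loc=100, scale=30, size=60, random_state=42)

c, loc, scale = genextreme.fit(annual_maxima)           # maximum-likelihood GEV fit
T = 200                                                 # return period in years
design_flood = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
print(f"estimated {T}-year flood: {design_flood:.1f}")
```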

  9. Effective sample size: Quick estimation of the effect of related samples in genetic case-control association analyses.

    PubMed

    Yang, Yaning; Remmers, Elaine F; Ogunwole, Chukwuma B; Kastner, Daniel L; Gregersen, Peter K; Li, Wentian

    2011-02-01

    Affected relatives are essential for pedigree linkage analysis, however, they cause a violation of the independent sample assumption in case-control association studies. To avoid the correlation between samples, a common practice is to take only one affected sample per pedigree in association analysis. Although several methods exist in handling correlated samples, they are still not widely used in part because these are not easily implemented, or because they are not widely known. We advocate the effective sample size method as a simple and accessible approach for case-control association analysis with correlated samples. This method modifies the chi-square test statistic, p-value, and 95% confidence interval of the odds-ratio by replacing the apparent number of allele or genotype counts with the effective ones in the standard formula, without the need for specialized computer programs. We present a simple formula for calculating effective sample size for many types of relative pairs and relative sets. For allele frequency estimation, the effective sample size method captures the variance inflation exactly. For genotype frequency, simulations showed that effective sample size provides a satisfactory approximation. A gene which is previously identified as a type 1 diabetes susceptibility locus, the interferon-induced helicase gene (IFIH1), is shown to be significantly associated with rheumatoid arthritis when the effective sample size method is applied. This significant association is not established if only one affected sib per pedigree were used in the association analysis. Relationship between the effective sample size method and other methods - the generalized estimation equation, variance of eigenvalues for correlation matrices, and genomic controls - are discussed.
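
    Not the paper's per-relative-type formulas, just the standard effective-sample-size result for n equally correlated observations, n_eff = n / (1 + (n − 1)·ρ), shown as an illustration of the same idea: correlated relatives contribute less information than n independent samples, so counts can be deflated accordingly. The correlation values below are illustrative.

```python
def effective_sample_size(n: int, rho: float) -> float:
    """Effective number of independent observations for n equicorrelated samples."""
    return n / (1 + (n - 1) * rho)

for rho in (0.0, 0.25, 0.5):
    print(f"rho = {rho:.2f}: a pair counts as {effective_sample_size(2, rho):.2f} independent samples")
```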

  10. Effective Sample Size: Quick Estimation of the Effect of Related Samples in Genetic Case-Control Association Analyses

    PubMed Central

    Yang, Yaning; Remmers, Elaine F.; Ogunwole, Chukwuma B.; Kastner, Daniel L.; Gregersen, Peter K.; Li, Wentian

    2011-01-01

    Summary Affected relatives are essential for pedigree linkage analysis, however, they cause a violation of the independent sample assumption in case-control association studies. To avoid the correlation between samples, a common practice is to take only one affected sample per pedigree in association analysis. Although several methods exist in handling correlated samples, they are still not widely used in part because these are not easily implemented, or because they are not widely known. We advocate the effective sample size method as a simple and accessible approach for case-control association analysis with correlated samples. This method modifies the chi-square test statistic, p-value, and 95% confidence interval of the odds-ratio by replacing the apparent number of allele or genotype counts with the effective ones in the standard formula, without the need for specialized computer programs. We present a simple formula for calculating effective sample size for many types of relative pairs and relative sets. For allele frequency estimation, the effective sample size method captures the variance inflation exactly. For genotype frequency, simulations showed that effective sample size provides a satisfactory approximation. A gene which is previously identified as a type 1 diabetes susceptibility locus, the interferon-induced helicase gene (IFIH1), is shown to be significantly associated with rheumatoid arthritis when the effective sample size method is applied. This significant association is not established if only one affected sib per pedigree were used in the association analysis. Relationship between the effective sample size method and other methods – the generalized estimation equation, variance of eigenvalues for correlation matrices, and genomic controls – are discussed. PMID:21333602

  12. Sample Size for Measuring Grammaticality in Preschool Children from Picture-Elicited Language Samples

    ERIC Educational Resources Information Center

    Eisenberg, Sarita L.; Guo, Ling-Yu

    2015-01-01

    Purpose: The purpose of this study was to investigate whether a shorter language sample elicited with fewer pictures (i.e., 7) would yield a percent grammatical utterances (PGU) score similar to that computed from a longer language sample elicited with 15 pictures for 3-year-old children. Method: Language samples were elicited by asking forty…

  13. Sampling bee communities using pan traps: alternative methods increase sample size

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  14. Particle size distribution and chemical composition of total mixed rations for dairy cattle: water addition and feed sampling effects.

    PubMed

    Arzola-Alvarez, C; Bocanegra-Viezca, J A; Murphy, M R; Salinas-Chavira, J; Corral-Luna, A; Romanos, A; Ruíz-Barrera, O; Rodríguez-Muela, C

    2010-09-01

    Four dairy farms were used to determine the effects of water addition to diets and sample collection location on the particle size distribution and chemical composition of total mixed rations (TMR). Samples were collected weekly from the mixing wagon and from 3 locations in the feed bunk (top, middle, and bottom) for 5 mo (April, May, July, August, and October). Samples were partially dried to determine the effect of moisture on particle size distribution. Particle size distribution was measured using the Penn State Particle Size Separator. Crude protein, neutral detergent fiber, and acid detergent fiber contents were also analyzed. Particle fractions 19 to 8, 8 to 1.18, and <1.18 mm were judged adequate in all TMR for rumen function and milk yield; however, the percentage of material >19 mm was greater than recommended for TMR, according to the guidelines of Cooperative Extension of Pennsylvania State University. The particle size distribution in April differed from that in October, but intermediate months (May, July, and August) had similar particle size distributions. Samples from the bottom of the feed bunk had the highest percentage of particles retained on the 19-mm sieve. Samples from the top and middle of the feed bunk were similar to those from the mixing wagon. Higher percentages of particles were retained on >19, 19 to 8, and 8 to 1.18 mm sieves for wet than dried samples. The reverse was found for particles passing the 1.18-mm sieve. Mean particle size was higher for wet than dried samples. The crude protein, neutral detergent fiber, and acid detergent fiber contents of TMR varied with month of sampling (18-21, 40-57, and 21-34%, respectively) but were within recommended ranges for high-yielding dairy cows. Analyses of TMR particle size distributions are useful for proper feed bunk management and formulation of diets that maintain rumen function and maximize milk production and quality. Water addition may help reduce dust associated with feeding TMR.

  15. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
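
    A sketch of blinded sample size re-estimation: at the interim look the pooled (one-sample) standard deviation is computed without unblinding, then plugged into the usual two-sample formula with the originally assumed treatment effect. The adjusted significance level discussed in the abstract is not derived here; pilot size, effect, and power are illustrative.

```python
import math
import random
from scipy.stats import norm

def reestimated_n_per_arm(blinded_values, delta, alpha=0.05, power=0.90):
    """Recompute the per-arm sample size from a blinded internal pilot."""
    n = len(blinded_values)
    mean = sum(blinded_values) / n
    s2 = sum((x - mean) ** 2 for x in blinded_values) / (n - 1)   # one-sample variance, no unblinding
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * s2 * z**2 / delta**2)

random.seed(3)
pilot = [random.gauss(0.5, 2.0) for _ in range(40)]   # 40 blinded interim observations
print(reestimated_n_per_arm(pilot, delta=1.0))        # assumed treatment effect of 1.0 units
```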

  16. Effect Size, Statistical Power and Sample Size Requirements for the Bootstrap Likelihood Ratio Test in Latent Class Analysis

    PubMed Central

    Dziak, John J.; Lanza, Stephanie T.; Tan, Xianming

    2014-01-01

    Selecting the number of different classes which will be assumed to exist in the population is an important step in latent class analysis (LCA). The bootstrap likelihood ratio test (BLRT) provides a data-driven way to evaluate the relative adequacy of a (K −1)-class model compared to a K-class model. However, very little is known about how to predict the power or the required sample size for the BLRT in LCA. Based on extensive Monte Carlo simulations, we provide practical effect size measures and power curves which can be used to predict power for the BLRT in LCA given a proposed sample size and a set of hypothesized population parameters. Estimated power curves and tables provide guidance for researchers wishing to size a study to have sufficient power to detect hypothesized underlying latent classes. PMID:25328371

  17. 40 CFR 761.286 - Sample size and procedure for collecting a sample.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) TOXIC SUBSTANCES CONTROL ACT POLYCHLORINATED BIPHENYLS (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling To Verify Completion of Self-Implementing Cleanup...

  18. Implications of sampling design and sample size for national carbon accounting systems

    PubMed Central

    2011-01-01

    Background: Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests the information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. Results: We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability, the percent standard error as a function of total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Conclusions, brief summary and potential implications: Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV-systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial

  19. A Comparative Study of Power and Sample Size Calculations for Multivariate General Linear Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2003-01-01

    Repeated measures and longitudinal studies arise often in social and behavioral science research. During the planning stage of such studies, the calculations of sample size are of particular interest to the investigators and should be an integral part of the research projects. In this article, we consider the power and sample size calculations for…

  20. Using the Student's "t"-Test with Extremely Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C .F.

    2013-01-01

    Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…

  1. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Calculating Sample Size for NYTD Follow-Up Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... REQUIREMENTS APPLICABLE TO TITLE IV-E Pt. 1356, App. C Appendix C to Part 1356—Calculating Sample Size for...

  2. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

    ERIC Educational Resources Information Center

    Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.

    2013-01-01

    Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…

  3. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
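
    A small numeric illustration of the two weighting schemes the abstract compares: averaging study effect sizes with inverse-variance weights versus weights proportional to sample size. The effect sizes, variances, and sample sizes below are made up for the example.

```python
import numpy as np

effects = np.array([0.20, 0.35, 0.50])          # per-study standardized effect sizes
variances = np.array([0.010, 0.040, 0.080])     # their estimated sampling variances
sizes = np.array([200, 60, 30])                 # per-study sample sizes

w_iv = 1 / variances                            # inverse-variance weights
w_n = sizes.astype(float)                       # sample-size weights

print("inverse-variance weighted mean:", np.sum(w_iv * effects) / np.sum(w_iv))
print("sample-size weighted mean:     ", np.sum(w_n * effects) / np.sum(w_n))
```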

  4. Thermomagnetic behavior of magnetic susceptibility - heating rate and sample size effects

    NASA Astrophysics Data System (ADS)

    Jordanova, Diana; Jordanova, Neli

    2015-12-01

    Thermomagnetic analysis of magnetic susceptibility k(T) was carried out for a number of natural powder materials from soils, baked clay and anthropogenic dust samples using fast (11 °C/min) and slow (6.5 °C/min) heating rates available in the furnace of Kappabridge KLY2 (Agico). Based on the additional data for mineralogy, grain size and magnetic properties of the studied samples, behaviour of k(T) cycles and the observed differences in the curves for fast and slow heating rate are interpreted in terms of mineralogical transformations and Curie temperatures (Tc). The effect of different sample size is also explored, using large volume and small volume of powder material. It is found that soil samples show enhanced information on mineralogical transformations and appearance of new strongly magnetic phases when using fast heating rate and large sample size. This approach moves the transformation to higher temperature, but enhances the amplitude of the signal of the newly created phase. Large sample size gives prevalence of the local micro-environment, created by evolving gases released during transformations. The example from archeological brick reveals the effect of different sample sizes on the observed Curie temperatures on heating and cooling curves, when the magnetic carrier is substituted magnetite (Mn0.2Fe2.70O4). Large sample size leads to bigger differences in Tcs on heating and cooling, while small sample size results in similar Tcs for both heating rates.

  5. Importance of Sample Size for the Estimation of Repeater F Waves in Amyotrophic Lateral Sclerosis

    PubMed Central

    Fang, Jia; Liu, Ming-Sheng; Guan, Yu-Zhou; Cui, Bo; Cui, Li-Ying

    2015-01-01

    Background: In amyotrophic lateral sclerosis (ALS), repeater F waves are increased. Accurate assessment of repeater F waves requires an adequate sample size. Methods: We studied the F waves of left ulnar nerves in ALS patients. Based on the presence or absence of pyramidal signs in the left upper limb, the ALS patients were divided into two groups: One group with pyramidal signs designated as P group and the other without pyramidal signs designated as NP group. The Index repeating neurons (RN) and Index repeater F waves (Freps) were compared among the P, NP and control groups following 20 and 100 stimuli respectively. For each group, the Index RN and Index Freps obtained from 20 and 100 stimuli were compared. Results: In the P group, the Index RN (P = 0.004) and Index Freps (P = 0.001) obtained from 100 stimuli were significantly higher than from 20 stimuli. For F waves obtained from 20 stimuli, no significant differences were identified between the P and NP groups for Index RN (P = 0.052) and Index Freps (P = 0.079); The Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than the control group; The Index RN (P = 0.002) of the NP group was significantly higher than the control group. For F waves obtained from 100 stimuli, the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than the NP group; The Index RN (P < 0.001) and Index Freps (P < 0.001) of the P and NP groups were significantly higher than the control group. Conclusions: Increased repeater F waves reflect increased excitability of motor neuron pool and indicate upper motor neuron dysfunction in ALS. For an accurate evaluation of repeater F waves in ALS patients especially those with moderate to severe muscle atrophy, 100 stimuli would be required. PMID:25673456

  6. Effect of sampling size on the determination of accurate pesticide residue levels in Japanese agricultural commodities.

    PubMed

    Fujita, Masahiro; Yajima, Tomonari; Iijima, Kazuaki; Sato, Kiyoshi

    2012-05-01

    The uncertainty in pesticide residue levels (UPRL) associated with sampling size was estimated using individual acetamiprid and cypermethrin residue data from preharvested apple, broccoli, cabbage, grape, and sweet pepper samples. The relative standard deviation from the mean of each sampling size (n = 2^x, where x = 1-6) of randomly selected samples was defined as the UPRL for each sampling size. The estimated UPRLs, which were calculated on the basis of the regulatory sampling size recommended by the OECD Guidelines on Crop Field Trials (weights from 1 to 5 kg, and commodity unit numbers from 12 to 24), ranged from 2.1% for cypermethrin in sweet peppers to 14.6% for cypermethrin in cabbage samples. The percentages of commodity exceeding the maximum residue limits (MRLs) specified by the Japanese Food Sanitation Law may be predicted from the equation derived from this study, which was based on samples of various size ranges with mean residue levels below the MRL. The estimated UPRLs have confirmed that sufficient sampling weight and numbers are required for analysis and/or re-examination of subsamples to provide accurate values of pesticide residue levels for the enforcement of MRLs. The equation derived from the present study would aid the estimation of more accurate residue levels even from small sampling sizes. PMID:22475588
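
    A sketch of the uncertainty measure described above: for each subsample size n = 2^x, draw many random subsamples from the individual residue values and report the relative standard deviation of the subsample means. The residue data here are simulated with a skewed distribution; the study's own per-unit data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
residues = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # stand-in for per-unit residue values

for x in range(1, 7):
    n = 2 ** x
    means = [rng.choice(residues, size=n, replace=False).mean() for _ in range(2000)]
    rsd = 100 * np.std(means) / np.mean(means)             # UPRL-style relative SD of subsample means
    print(f"n = {n:3d}: RSD of subsample means = {rsd:5.1f}%")
```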

  7. Effect of sample size in the evaluation of "in-field" sampling plans for aflatoxin B(1) determination in corn.

    PubMed

    Brera, Carlo; De Santis, Barbara; Prantera, Elisabetta; Debegnach, Francesca; Pannunzi, Elena; Fasano, Floriana; Berdini, Clara; Slate, Andrew B; Miraglia, Marina; Whitaker, Thomas B

    2010-08-11

    Use of proper sampling methods throughout the agri-food chain is crucial when it comes to effectively detecting contaminants in foods and feeds. The objective of the study was to estimate the performance of sampling plan designs to determine aflatoxin B1 (AFB1) contamination in corn fields. A total of 840 ears were selected from a corn field suspected of being contaminated with aflatoxin. The mean and variance among the aflatoxin values for each ear were 10.6 μg/kg and 2233.3, respectively. The variability and confidence intervals associated with sample means of a given size could be predicted using an equation associated with the normal distribution. Sample sizes of 248 and 674 ears would be required to estimate the true field concentration of 10.6 μg/kg within ±50 and ±30%, respectively. Using the distribution information from the study, operating characteristic curves were developed to show the performance of various sampling plan designs.

  8. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least
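
    A compact method-of-moments (Matheron) empirical variogram for irregularly spaced, throughfall-like data, the kind of estimator compared in the study. The simulated field, lag binning, and plot size are illustrative, and no model fitting (e.g. by residual maximum likelihood) is attempted here.

```python
import numpy as np

rng = np.random.default_rng(42)
coords = rng.uniform(0, 50, size=(150, 2))          # 150 sampling locations in a 50 m plot
values = rng.gamma(shape=2.0, scale=5.0, size=150)  # skewed, throughfall-like values

def empirical_variogram(coords, values, bin_width=5.0, max_lag=25.0):
    """Method-of-moments semivariance averaged within distance bins."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    i, j = np.triu_indices(len(values), k=1)        # each pair once
    lags, semivar = d[i, j], sq[i, j]
    edges = np.arange(0, max_lag + bin_width, bin_width)
    gamma = [semivar[(lags >= lo) & (lags < hi)].mean() for lo, hi in zip(edges[:-1], edges[1:])]
    return edges[:-1] + bin_width / 2, np.array(gamma)

centers, gamma = empirical_variogram(coords, values)
for h, g in zip(centers, gamma):
    print(f"lag ~{h:4.1f} m: gamma = {g:6.2f}")
```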

  9. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous

  10. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    PubMed

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
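
    A NumPy sketch of the multinomial-weighting idea described above: instead of resampling the data, draw a matrix of multinomial bootstrap weights and compute all replicates of Pearson's correlation from weighted sample moments with a few matrix products. The paper's reference implementation is in R; the data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 50, 2000
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)

W = rng.multinomial(n, np.full(n, 1 / n), size=B) / n   # B x n matrix of bootstrap weights
Ex, Ey = W @ x, W @ y                                   # bootstrap replicates of first moments
Exx, Eyy, Exy = W @ (x * x), W @ (y * y), W @ (x * y)   # and of second moments
r_boot = (Exy - Ex * Ey) / np.sqrt((Exx - Ex**2) * (Eyy - Ey**2))

print("observed r:", np.corrcoef(x, y)[0, 1])
print("bootstrap 95% CI:", np.percentile(r_boot, [2.5, 97.5]))
```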

  12. Sample Size under Inverse Negative Binomial Group Testing for Accuracy in Parameter Estimation

    PubMed Central

    Montesinos-López, Osval Antonio; Montesinos-López, Abelardo; Crossa, José; Eskridge, Kent

    2012-01-01

    Background The group testing method has been proposed for the detection and estimation of genetically modified plants (adventitious presence of unwanted transgenic plants, AP). For binary response variables (presence or absence), group testing is efficient when the prevalence is low, so that estimation, detection, and sample size methods have been developed under the binomial model. However, when the event is rare (low prevalence <0.1), and testing occurs sequentially, inverse (negative) binomial pooled sampling may be preferred. Methodology/Principal Findings This research proposes three sample size procedures (two computational and one analytic) for estimating prevalence using group testing under inverse (negative) binomial sampling. These methods provide the required number of positive pools, given a pool size (k), for estimating the proportion of AP plants using the Dorfman model and inverse (negative) binomial sampling. We give real and simulated examples to show how to apply these methods and the proposed sample-size formula. The Monte Carlo method was used to study the coverage and level of assurance achieved by the proposed sample sizes. An R program to create other scenarios is given in Appendix S2. Conclusions The three methods ensure precision in the estimated proportion of AP because they guarantee that the width (W) of the confidence interval (CI) will be equal to, or narrower than, the desired width, with a specified probability. With the Monte Carlo study we found that the computational Wald procedure (method 2) produces the most precise sample size (with coverage and assurance levels very close to nominal values) and that the sample size based on the Clopper-Pearson CI (method 1) is conservative (overestimates the sample size); the analytic Wald sample size method we developed (method 3) sometimes underestimated the optimum number of pools. PMID:22457714

  13. Optimal and maximin sample sizes for multicentre cost-effectiveness trials.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2015-10-01

    This paper deals with the optimal sample sizes for a multicentre trial in which the cost-effectiveness of two treatments in terms of net monetary benefit is studied. A bivariate random-effects model, with the treatment-by-centre interaction effect being random and the main effect of centres fixed or random, is assumed to describe both costs and effects. The optimal sample sizes concern the number of centres and the number of individuals per centre in each of the treatment conditions. These numbers maximize the efficiency or power for given research costs or minimize the research costs at a desired level of efficiency or power. Information on model parameters and sampling costs is required to calculate these optimal sample sizes. In case of limited information on relevant model parameters, sample size formulas are derived for so-called maximin sample sizes which guarantee a power level at the lowest study costs. Four different maximin sample sizes are derived based on the signs of the lower bounds of two model parameters, with one of these cases being the worst (most conservative). We numerically evaluate the efficiency of using this worst case instead of the others. Finally, an expression is derived for calculating optimal and maximin sample sizes that yield sufficient power to test the cost-effectiveness of two treatments. PMID:25656551

  14. Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models

    PubMed Central

    Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.

    2008-01-01

    Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
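
    A rough Python illustration of the sampling-variance effect described above, using a hypothetical two-stage matrix with made-up vital rates rather than the authors' plant data:

      import numpy as np

      def lam(s_juv, s_adult, fec):
          """Dominant eigenvalue (lambda) of a simple juvenile/adult matrix."""
          A = np.array([[0.0, fec],
                        [s_juv, s_adult]])
          return np.max(np.real(np.linalg.eigvals(A)))

      rng = np.random.default_rng(0)
      s_juv, s_adult, fec = 0.5, 0.8, 1.2          # assumed true vital rates
      true_lambda = lam(s_juv, s_adult, fec)

      for n in (10, 25, 50, 100, 500):             # individuals sampled per stage
          est = []
          for _ in range(2000):
              sj = rng.binomial(n, s_juv) / n      # estimated juvenile survival
              sa = rng.binomial(n, s_adult) / n    # estimated adult survival
              f = rng.poisson(fec * n) / n         # estimated fecundity
              est.append(lam(sj, sa, f))
          print(f"n = {n:3d}  mean bias in lambda = {np.mean(est) - true_lambda:+.4f}")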

  15. Sample Size for Measuring Grammaticality in Preschool Children From Picture-Elicited Language Samples

    PubMed Central

    Guo, Ling-Yu

    2015-01-01

    Purpose The purpose of this study was to investigate whether a shorter language sample elicited with fewer pictures (i.e., 7) would yield a percent grammatical utterances (PGU) score similar to that computed from a longer language sample elicited with 15 pictures for 3-year-old children. Method Language samples were elicited by asking forty 3-year-old children with varying language skills to talk about pictures in response to prompts. PGU scores were computed for each of two 7-picture sets and for the full set of 15 pictures. Results PGU scores for the two 7-picture sets did not differ significantly from, and were highly correlated with, PGU scores for the full set and with each other. Agreement for making pass–fail decisions between each 7-picture set and the full set and between the two 7-picture sets ranged from 80% to 100%. Conclusion The current study suggests that the PGU measure is robust enough that it can be computed on the basis of 7 pictures, at least in 3-year-old children whose language samples were elicited using similar procedures. PMID:25615691

  16. On the Importance of Accounting for Competing Risks in Pediatric Brain Cancer: II. Regression Modeling and Sample Size

    SciTech Connect

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-03-15

    Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.

  17. Optimal designs of the median run length based double sampling X chart for minimizing the average sample size.

    PubMed

    Teoh, Wei Lin; Khoo, Michael B C; Teh, Sin Yin

    2013-01-01

    Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the right-skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed. PMID:23935873

  18. Optimal Designs of the Median Run Length Based Double Sampling X̄ Chart for Minimizing the Average Sample Size

    PubMed Central

    Teoh, Wei Lin; Khoo, Michael B. C.; Teh, Sin Yin

    2013-01-01

    Designs of the double sampling (DS) X̄ chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the right-skewed run length distribution. Since the DS X̄ chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X̄ chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X̄ and Shewhart X̄ charts demonstrate the superiority of the proposed optimal MRL-based DS X̄ chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X̄ chart in reducing the sample size needed. PMID:23935873

  19. Parameter Estimation with Small Sample Size: A Higher-Order IRT Model Approach

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Hong, Yuan

    2010-01-01

    Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…

  20. Uniform power method for sample size calculation in historical control studies with binary response.

    PubMed

    Lee, J J; Tseng, C

    2001-08-01

    Makuch and Simon gave a sample size calculation formula for historical control (HC) studies that assumed that the observed response rate in the control group is the true response rate. We dropped this assumption and computed the expected power and expected sample size to evaluate the performance of the procedure under the omniscient model. When there is uncertainty in the HC response rate but this uncertainty is not considered, Makuch and Simon's method produces a sample size that gives a considerably lower power than that specified. Even the larger sample size obtained from the randomized design formula and applied to the HC setting does not guarantee the advertised power in the HC setting. We developed a new uniform power method to search for the sample size required for the experimental group to yield an exact power without relying on the estimated HC response rate being perfectly correct. The new method produces the correct uniform predictive power for all permissible response rates. The resulting sample size is closer to the sample size needed for the randomized design than Makuch and Simon's method, especially when there is a small difference in response rates or a limited sample size in the HC group. HC design may be a viable option in clinical trials when the patient selection bias and the outcome evaluation bias can be minimized. However, the common perception of the extra sample size savings is largely unjustified without the strong assumption that the observed HC response rate is equal to the true control response rate. Generally speaking, results from HC studies need to be confirmed by studies with concurrent controls and cannot be used for making definitive decisions.

  1. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    PubMed

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction: the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input.

  2. A behavioural Bayes approach to the determination of sample size for clinical trials considering efficacy and safety: imbalanced sample size in treatment groups.

    PubMed

    Kikuchi, Takashi; Gittins, John

    2011-08-01

    The behavioural Bayes approach to sample size determination for clinical trials assumes that the number of subsequent patients switching to a new drug from the current drug depends on the strength of the evidence for efficacy and safety that was observed in the clinical trials. The optimal sample size is the one which maximises the expected net benefit of the trial. The approach has been developed in a series of papers by Pezeshk and the present authors (Gittins JC, Pezeshk H. A behavioral Bayes method for determining the size of a clinical trial. Drug Information Journal 2000; 34: 355-63; Gittins JC, Pezeshk H. How Large should a clinical trial be? The Statistician 2000; 49(2): 177-87; Gittins JC, Pezeshk H. A decision theoretic approach to sample size determination in clinical trials. Journal of Biopharmaceutical Statistics 2002; 12(4): 535-51; Gittins JC, Pezeshk H. A fully Bayesian approach to calculating sample sizes for clinical trials with binary responses. Drug Information Journal 2002; 36: 143-50; Kikuchi T, Pezeshk H, Gittins J. A Bayesian cost-benefit approach to the determination of sample size in clinical trials. Statistics in Medicine 2008; 27(1): 68-82; Kikuchi T, Gittins J. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety. Statistics in Medicine 2009; 28(18): 2293-306; Kikuchi T, Gittins J. A Bayesian procedure for cost-benefit evaluation of a new drug in multi-national clinical trials. Statistics in Medicine 2009 (Submitted)). The purpose of this article is to provide a rationale for experimental designs which allocate more patients to the new treatment than to the control group. The model uses a logistic weight function, including an interaction term linking efficacy and safety, which determines the number of patients choosing the new drug, and hence the resulting benefit. A Monte Carlo simulation is employed for the calculation. Having a larger group of patients on the new drug in general

  3. Mineralogical, optical, geochemical, and particle size properties of four sediment samples for optical physics research

    NASA Technical Reports Server (NTRS)

    Bice, K.; Clement, S. C.

    1981-01-01

    X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.

  4. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
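
    A small Python sketch of the kind of subsampling experiment described above, using simulated log-log data in place of the Alligator measurements (the slope, scatter, and sample sizes are invented for illustration):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      N = 500                                        # "complete" collection of specimens
      log_len = rng.uniform(1.0, 3.0, N)             # log skull length
      log_wid = 1.15 * log_len + rng.normal(0.0, 0.12, N)   # slope 1.15 = mild allometry

      def detects_allometry(x, y, alpha=0.05):
          """Reject H0 of isometry (slope = 1) in a log-log regression?"""
          fit = stats.linregress(x, y)
          t = (fit.slope - 1.0) / fit.stderr
          return 2 * stats.t.sf(abs(t), df=len(x) - 2) < alpha

      for n in (8, 15, 30, 60, 120):
          hits = 0
          for _ in range(1000):
              idx = rng.choice(N, size=n, replace=False)
              hits += detects_allometry(log_len[idx], log_wid[idx])
          print(f"n = {n:3d}  power to detect the allometric slope = {hits / 1000:.2f}")

    At small n the null of isometry is frequently retained even though the underlying slope is allometric, which is the Type II error pattern the authors describe.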

  5. Procedures manual for the recommended ARB (Air Resources Board) sized chemical sample method (cascade cyclones)

    SciTech Connect

    McCain, J.D.; Dawes, S.S.; Farthing, W.E.

    1986-05-01

    The report is Attachment No. 2 to the Final Report of ARB Contract A3-092-32 and provides a tutorial on the use of Cascade (Series) Cyclones to obtain size-fractionated particulate samples from industrial flue gases at stationary sources. The instrumentation and procedures described are designed to protect the purity of the collected samples so that post-test chemical analysis may be performed for organic and inorganic compounds, including instrumental analysis for trace elements. The instrumentation described collects bulk quantities for each of six size fractions over the range 10 to 0.4 micrometer diameter. The report describes the operating principles, calibration, and empirical modeling of small cyclone performance. It also discusses the preliminary calculations, operation, sample retrieval, and data analysis associated with the use of cyclones to obtain size-segregated samples and to measure particle-size distributions.

  6. Bulk particle size distribution and magnetic properties of particle-sized fractions from loess and paleosol samples in Central Asia

    NASA Astrophysics Data System (ADS)

    Zan, Jinbo; Fang, Xiaomin; Yang, Shengli; Yan, Maodu

    2015-01-01

    Studies demonstrate that particle size separation based on gravitational settling and detailed rock magnetic measurements of the resulting fractionated samples constitutes an effective approach to evaluating the relative contributions of pedogenic and detrital components in the loess and paleosol sequences on the Chinese Loess Plateau. So far, however, similar work has not been undertaken on the loess deposits in Central Asia. In this paper, 17 loess and paleosol samples from three representative loess sections in Central Asia were separated into four grain size fractions, and then systematic rock magnetic measurements were made on the fractions. Our results demonstrate that the content of the <4 μm fraction in the Central Asian loess deposits is relatively low and that the samples generally have a unimodal particle distribution with a peak in the medium-coarse silt range. We find no significant difference between the particle size distributions obtained by the laser diffraction and the pipette and wet sieving methods. Rock magnetic studies further demonstrate that the medium-coarse silt fraction (e.g., the 20-75 μm fraction) provides the main control on the magnetic properties of the loess and paleosol samples in Central Asia. The contribution of pedogenically produced superparamagnetic (SP) and stable single-domain (SD) magnetic particles to the bulk magnetic properties is very limited. In addition, the coarsest fraction (>75 μm) exhibits the minimum values of χ, χARM, and SIRM, demonstrating that the concentrations of ferrimagnetic grains are not positively correlated with the bulk particle size in the Central Asian loess deposits.

  7. Sample Size Calculation of Clinical Trials Published in Two Leading Endodontic Journals

    PubMed Central

    Shahravan, Arash; Haghdoost, Ali-Akbar; Rad, Maryam; Hashemipoor, Maryamalsadat; Sharifi, Maryam

    2014-01-01

    Introduction: The purpose of this article was to evaluate the quality of sample size calculation reports in published clinical trials in Journal of Endodontics and International Endodontic Journal in the years 2000-1 and 2009-10. Materials and Methods: Articles fulfilling the inclusion criteria were collected. The criteria were: publication year, research design, types of control group, reporting sample size calculation, the number of participants in each group, study outcome, amount of type I (α) and II (β) errors, method used for estimating prevalence or standard deviation, percentage of meeting the expected sample size and consideration of the clinical importance level in sample size calculation. Data were extracted from all included articles. Descriptive analyses were conducted. Inferential statistical analyses were done using independent t-test and Chi-square test with the significance level set at 0.05. Results: There was a statistically significant increase in 2009-10 compared with 2000-1 in terms of reporting sample size calculation (P=0.002), reporting the clinical importance level (P=0.003), and in the sample sizes of clinical trials (P=0.01). However, there was no significant difference between the two journals in terms of reporting sample size calculation, type of control group, frequency of various study designs and frequency of positive and negative clinical trials in different time periods (P>0.05). Conclusion: Sample size calculation in endodontic clinical trials improved significantly in 2009-10 when compared to 2000-1; however, further improvements would be desirable. PMID:24396377

  8. Sample Size Planning for Longitudinal Models: Accuracy in Parameter Estimation for Polynomial Change Parameters

    ERIC Educational Resources Information Center

    Kelley, Ken; Rausch, Joseph R.

    2011-01-01

    Longitudinal studies are necessary to examine individual change over time, with group status often being an important variable in explaining some individual differences in change. Although sample size planning for longitudinal studies has focused on statistical power, recent calls for effect sizes and their corresponding confidence intervals…

  9. The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models

    ERIC Educational Resources Information Center

    Schoeneberger, Jason A.

    2016-01-01

    The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…

  10. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency have been presented in the literature that do not depend upon ICER values. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers.
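
    For context, the ICER decision rule that the authors question reduces to a one-line comparison; the numbers below are purely hypothetical:

      # Conventional ICER rule (the approach critiqued above), with made-up values.
      delta_cost = 12000.0        # extra cost of the new intervention per patient
      delta_effect = 0.30         # extra QALYs gained per patient
      icer = delta_cost / delta_effect
      threshold = 50000.0         # willingness-to-pay threshold (lambda) per QALY
      print(f"ICER = {icer:.0f} per QALY ->", "adopt" if icer < threshold else "reject")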

  11. Exact Power and Sample Size Calculations for the Two One-Sided Tests of Equivalence

    PubMed Central

    Shieh, Gwowen

    2016-01-01

    Equivalence testing has been strongly recommended for demonstrating the comparability of treatment effects in a wide variety of research fields including medical studies. Although the essential properties of the favorable two one-sided tests of equivalence have been addressed in the literature, the associated power and sample size calculations were illustrated mainly for selecting the most appropriate approximate method. Moreover, conventional power analysis does not consider the allocation restrictions and cost issues of different sample size choices. To extend the practical usefulness of the two one-sided tests procedure, this article describes exact approaches to sample size determinations under various allocation and cost considerations. Because the presented features are not generally available in common software packages, both R and SAS computer codes are presented to implement the suggested power and sample size computations for planning equivalence studies. The exact power function of the TOST procedure is employed to compute optimal sample sizes under four design schemes allowing for different allocation and cost concerns. The proposed power and sample size methodology should be useful for medical sciences to plan equivalence studies. PMID:27598468
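
    The paper's exact computations are not reproduced here; as a hedged stand-in, the following Python sketch estimates TOST power by simulation and searches for the smallest equal group size reaching 80% power (the margin, SD, and true difference are illustrative assumptions):

      import numpy as np
      from scipy import stats

      def tost_reject(x, y, delta, alpha=0.05):
          """Two one-sided t-tests: conclude equivalence if both one-sided tests reject."""
          d = np.mean(x) - np.mean(y)
          n1, n2 = len(x), len(y)
          sp2 = ((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1)) / (n1 + n2 - 2)
          se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
          tc = stats.t.ppf(1 - alpha, n1 + n2 - 2)
          return (d + delta) / se > tc and (d - delta) / se < -tc

      def simulated_power(n, true_diff=0.0, sd=1.0, delta=0.5, n_sim=3000, seed=0):
          rng = np.random.default_rng(seed)
          hits = sum(tost_reject(rng.normal(true_diff, sd, n), rng.normal(0.0, sd, n), delta)
                     for _ in range(n_sim))
          return hits / n_sim

      for n in range(10, 101, 5):                  # smallest n per group with ~80% power
          p = simulated_power(n)
          if p >= 0.80:
              print(f"n per group = {n}, simulated power = {p:.2f}")
              break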

  12. A margin based approach to determining sample sizes via tolerance bounds.

    SciTech Connect

    Newcomer, Justin T.; Freeland, Katherine Elizabeth

    2013-09-01

    This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
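
    A compact Python sketch of the tolerance-bound reasoning described above, using the standard exact factor for a one-sided normal tolerance limit (the planning mean, SD, and requirement are hypothetical, and this is not the authors' QMU code):

      import numpy as np
      from scipy import stats

      def k_factor(n, coverage=0.95, confidence=0.90):
          """Exact one-sided normal tolerance-limit factor k (upper bound = xbar + k*s)."""
          zp = stats.norm.ppf(coverage)
          return stats.nct.ppf(confidence, df=n - 1, nc=zp * np.sqrt(n)) / np.sqrt(n)

      mean_est, sd_est, requirement = 10.0, 1.5, 13.0   # assumed planning values
      margin = requirement - mean_est                   # anticipated margin to the requirement

      for n in range(3, 101):
          uncertainty = k_factor(n) * sd_est            # tolerance-bound uncertainty estimate
          if uncertainty < margin:                      # tolerance ratio margin/uncertainty > 1
              print(f"smallest n with uncertainty below margin: {n} "
                    f"(k = {k_factor(n):.2f}, uncertainty = {uncertainty:.2f})")
              break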

  13. Exact Power and Sample Size Calculations for the Two One-Sided Tests of Equivalence.

    PubMed

    Shieh, Gwowen

    2016-01-01

    Equivalence testing has been strongly recommended for demonstrating the comparability of treatment effects in a wide variety of research fields including medical studies. Although the essential properties of the favorable two one-sided tests of equivalence have been addressed in the literature, the associated power and sample size calculations were illustrated mainly for selecting the most appropriate approximate method. Moreover, conventional power analysis does not consider the allocation restrictions and cost issues of different sample size choices. To extend the practical usefulness of the two one-sided tests procedure, this article describes exact approaches to sample size determinations under various allocation and cost considerations. Because the presented features are not generally available in common software packages, both R and SAS computer codes are presented to implement the suggested power and sample size computations for planning equivalence studies. The exact power function of the TOST procedure is employed to compute optimal sample sizes under four design schemes allowing for different allocation and cost concerns. The proposed power and sample size methodology should be useful for medical sciences to plan equivalence studies. PMID:27598468

  14. Minimum Sample Size for Cronbach's Coefficient Alpha: A Monte-Carlo Study

    ERIC Educational Resources Information Center

    Yurdugul, Halil

    2008-01-01

    The coefficient alpha is the most widely used measure of internal consistency for composite scores in the educational and psychological studies. However, due to the difficulties of data gathering in psychometric studies, the minimum sample size for the sample coefficient alpha has been frequently debated. There are various suggested minimum sample…
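
    A brief Python sketch of the kind of Monte Carlo experiment this debate turns on (the number of items, factor loading, and sample sizes are arbitrary choices, not the study's design):

      import numpy as np

      def cronbach_alpha(data):
          """data: (n_subjects, n_items); classical alpha from item and total variances."""
          k = data.shape[1]
          item_var = data.var(axis=0, ddof=1).sum()
          total_var = data.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      rng = np.random.default_rng(0)
      k, loading = 8, 0.7                                # 8 items sharing one factor

      def one_sample_alpha(n):
          f = rng.normal(size=(n, 1))                    # latent trait
          e = rng.normal(size=(n, k)) * np.sqrt(1 - loading**2)
          return cronbach_alpha(loading * f + e)

      for n in (30, 50, 100, 300, 1000):
          alphas = np.array([one_sample_alpha(n) for _ in range(2000)])
          print(f"n = {n:4d}  mean alpha = {alphas.mean():.3f}  SD across samples = {alphas.std():.3f}")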

  15. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing" Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…

  16. Regularization Methods for Fitting Linear Models with Small Sample Sizes: Fitting the Lasso Estimator Using R

    ERIC Educational Resources Information Center

    Finch, W. Holmes; Finch, Maria E. Hernandez

    2016-01-01

    Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…

  17. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    ERIC Educational Resources Information Center

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  18. Computer program for sample sizes required to determine disease incidence in fish populations

    USGS Publications Warehouse

    Ossiander, Frank J.; Wedemeyer, Gary

    1973-01-01

    A computer program is described for generating the sample size tables required in fish hatchery disease inspection and certification. The program was designed to aid in detection of infectious pancreatic necrosis (IPN) in salmonids, but it is applicable to any fish disease inspection when the sampling plan follows the hypergeometric distribution.
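
    The underlying calculation is straightforward to reproduce; a hedged Python sketch under the hypergeometric model (the lot sizes, assumed prevalences, and 95% detection target are illustrative, and a perfectly sensitive test is assumed):

      from scipy import stats

      def min_sample_size(N, prevalence, detect_prob=0.95):
          """Smallest n such that a sample of n fish from a lot of N contains at least
          one infected fish with probability >= detect_prob (hypergeometric model)."""
          D = max(1, int(round(N * prevalence)))         # infected fish in the lot
          for n in range(1, N + 1):
              p_miss = stats.hypergeom.pmf(0, N, D, n)   # P(no infected fish in the sample)
              if 1 - p_miss >= detect_prob:
                  return n
          return N

      for N in (500, 5000, 50000):
          for prev in (0.10, 0.05, 0.02):
              print(f"lot = {N:6d}  assumed prevalence = {prev:.0%}  n = {min_sample_size(N, prev)}")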

  19. The Maximal Value of a Zipf Size Variable: Sampling Properties and Relationship to Other Parameters.

    ERIC Educational Resources Information Center

    Tague, Jean; Nicholls, Paul

    1987-01-01

    Examines relationships among the parameters of the Zipf size-frequency distribution as well as its sampling properties. Highlights include its importance in bibliometrics, tables for the sampling distribution of the maximal value of a finite Zipf distribution, and an approximation formula for confidence intervals. (Author/LRW)

  20. Regression modeling of particle size distributions in urban storm water: advancements through improved sample collection methods

    USGS Publications Warehouse

    Fienen, Michael N.; Selbig, William R.

    2012-01-01

    A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.

  1. Effect of sample size in the evaluation of "in-field" sampling plans for aflatoxin B(1) determination in corn.

    PubMed

    Brera, Carlo; De Santis, Barbara; Prantera, Elisabetta; Debegnach, Francesca; Pannunzi, Elena; Fasano, Floriana; Berdini, Clara; Slate, Andrew B; Miraglia, Marina; Whitaker, Thomas B

    2010-08-11

    Use of proper sampling methods throughout the agri-food chain is crucial when it comes to effectively detecting contaminants in foods and feeds. The objective of the study was to estimate the performance of sampling plan designs to determine aflatoxin B(1) (AFB(1)) contamination in corn fields. A total of 840 ears were selected from a corn field suspected of being contaminated with aflatoxin. The mean and variance among the aflatoxin values for each ear were 10.6 μg/kg and 2233.3, respectively. The variability and confidence intervals associated with sample means of a given size could be predicted using an equation associated with the normal distribution. Sample sizes of 248 and 674 ears would be required to estimate the true field concentration of 10.6 μg/kg within ±50% and ±30%, respectively. Using the distribution information from the study, operating characteristic curves were developed to show the performance of various sampling plan designs. PMID:20608734

  2. EFFECTS OF SAMPLE SIZE ON THE STRESS-PERMEABILITY RELATIONSHIP FOR NATURAL FRACTURES

    SciTech Connect

    Gale, J. E.; Raven, K. G.

    1980-10-01

    Five granite cores (10.0, 15.0, 19.3, 24.5, and 29.4 cm in diameter) containing natural fractures oriented normal to the core axis, were used to study the effect of sample size on the permeability of natural fractures. Each sample, taken from the same fractured plane, was subjected to three uniaxial compressive loading and unloading cycles with a maximum axial stress of 30 MPa. For each loading and unloading cycle, the flowrate through the fracture plane from a central borehole under constant (±2% of the pressure increment) injection pressures was measured at specified increments of effective normal stress. Both fracture deformation and flowrate exhibited highly nonlinear variation with changes in normal stress. Both fracture deformation and flowrate hysteresis between loading and unloading cycles were observed for all samples, but this hysteresis decreased with successive loading cycles. The results of this study suggest that a sample-size effect exists. Fracture deformation and flowrate data indicate that crushing of the fracture plane asperities occurs in the smaller samples because of a poorer initial distribution of contact points than in the larger samples, which deform more elastically. Steady-state flow tests also suggest a decrease in minimum fracture permeability at maximum normal stress with increasing sample size for four of the five samples. Regression analyses of the flowrate and fracture closure data suggest that deformable natural fractures deviate from the cubic relationship between fracture aperture and flowrate and that this is especially true for low flowrates and small apertures, when the fracture sides are in intimate contact under high normal stress conditions, In order to confirm the trends suggested in this study, it is necessary to quantify the scale and variation of fracture plane roughness and to determine, from additional laboratory studies, the degree of variation in the stress-permeability relationship between samples of the same

  3. The impact of particle size selective sampling methods on occupational assessment of airborne beryllium particulates.

    PubMed

    Sleeth, Darrah K

    2013-05-01

    In 2010, the American Conference of Governmental Industrial Hygienists (ACGIH) formally changed its Threshold Limit Value (TLV) for beryllium from a 'total' particulate sample to an inhalable particulate sample. This change may have important implications for workplace air sampling of beryllium. A history of particle size-selective sampling methods, with a special focus on beryllium, will be provided. The current state of the science on inhalable sampling will also be presented, including a look to the future at what new methods or technology may be on the horizon. This includes new sampling criteria focused on particle deposition in the lung, proposed changes to the existing inhalable convention, as well as how the issues facing beryllium sampling may help drive other changes in sampling technology.

  4. Reduced sample sizes for atrophy outcomes in Alzheimer's disease trials: baseline adjustment.

    PubMed

    Schott, J M; Bartlett, J W; Barnes, J; Leung, K K; Ourselin, S; Fox, N C

    2010-08-01

    Cerebral atrophy rate is increasingly used as an outcome measure for Alzheimer's disease (AD) trials. We used the Alzheimer's disease Neuroimaging initiative (ADNI) dataset to assess whether adjusting for baseline characteristics can reduce sample sizes. Controls (n = 199), patients with mild cognitive impairment (MCI) (n = 334) and AD (n = 144) had two MRI scans, 1 year apart; approximately 55% had baseline CSF tau, p-tau, and Aβ1-42. Whole brain (KN-BSI) and hippocampal (HMAPS-HBSI) atrophy rate, and ventricular expansion (VBSI) were calculated for each group; numbers required to power a placebo-controlled trial were estimated. Sample sizes per arm (80% power, 25% absolute rate reduction) for AD were (95% CI): brain atrophy = 81 (64,109), hippocampal atrophy = 88 (68,119), ventricular expansion = 118 (92,157); and for MCI: brain atrophy = 149 (122,188), hippocampal atrophy = 201 (160,262), ventricular expansion = 234 (191,295). To detect a 25% reduction relative to normal aging required increased sample sizes approximately 3-fold (AD), and approximately 5-fold (MCI). Disease severity and Aβ1-42 contributed significantly to atrophy rate variability. Adjusting for 11 predefined covariates reduced sample sizes by up to 30%. Treatment trials in AD should consider the effects of normal aging; adjusting for baseline characteristics can significantly reduce required sample sizes.
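
    A back-of-the-envelope Python sketch of why baseline adjustment shrinks the required sample size (the atrophy rates, SD, and R² values are hypothetical planning numbers, not the ADNI estimates):

      import numpy as np
      from scipy import stats

      def n_per_arm(sd, delta, alpha=0.05, power=0.80):
          """Two-arm comparison of mean atrophy rates, normal approximation."""
          za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
          return int(np.ceil(2 * (sd * (za + zb) / delta) ** 2))

      sd, delta = 1.2, 0.25 * 2.0   # SD of atrophy rate (%/yr); detect a 25% reduction of 2 %/yr
      print("unadjusted n per arm:", n_per_arm(sd, delta))

      # Baseline covariates explaining a fraction R^2 of the outcome variance reduce the
      # residual SD by sqrt(1 - R^2), the usual covariate-adjustment (ANCOVA) argument.
      for r2 in (0.1, 0.2, 0.3):
          print(f"R^2 = {r2:.1f}  adjusted n per arm:", n_per_arm(sd * np.sqrt(1 - r2), delta))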

  5. Demonstration of multi- and single-reader sample size program for diagnostic studies software

    NASA Astrophysics Data System (ADS)

    Hillis, Stephen L.; Schartz, Kevin M.

    2015-03-01

    The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with typical reader-performance measures, which can be estimated parametrically or nonparametrically. The program has an easy-to-use step-by-step intuitive interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (6) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (7) choice of single or multi-reader analyses; and (8) functionality in Windows, Mac OS, and Linux.

  6. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    USGS Publications Warehouse

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the
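
    A simple Python sketch of the MCP part of this exercise, using a simulated track in place of the bears' GPS fixes (the movement model and subsample sizes are invented for illustration):

      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(0)
      fixes = np.cumsum(rng.normal(size=(400, 2)), axis=0)   # hypothetical GPS fixes (random walk)
      true_area = ConvexHull(fixes).volume                   # .volume is the area for 2-D hulls

      for n in (12, 30, 60, 120, 240, 400):
          areas = [ConvexHull(fixes[rng.choice(len(fixes), size=n, replace=False)]).volume
                   for _ in range(500)]
          share = 100 * np.mean(areas) / true_area
          cv = np.std(areas) / np.mean(areas)
          print(f"n = {n:3d}  mean MCP area = {share:5.1f}% of the full-data area  CV = {cv:.2f}")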

  7. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    USGS Publications Warehouse

    Hitt, Nathaniel P.; Smith, David

    2013-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4-8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and type-I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of 8 fish could detect an increase of ∼ 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of ∼ 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2 this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of ∼ 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated by increased precision of composites for estimating mean

  8. Threshold-dependent sample sizes for selenium assessment with stream fish tissue.

    PubMed

    Hitt, Nathaniel P; Smith, David R

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α=0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites
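
    A hedged Python sketch of the parametric-bootstrap power calculation described above (the gamma coefficient of variation, thresholds, and sample sizes are assumptions for illustration, not the fitted West Virginia mean-to-variance model):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def power(n, threshold, increase, cv=0.5, alpha=0.05, n_sim=4000):
          """P(one-sided t-test declares mean Se above the threshold) when the true
          mean sits `increase` mg/kg above it; Se ~ gamma with an assumed CV."""
          mu = threshold + increase
          shape = 1.0 / cv**2                       # for a gamma, the CV depends only on shape
          hits = 0
          for _ in range(n_sim):
              x = rng.gamma(shape, mu / shape, size=n)
              t, p = stats.ttest_1samp(x, popmean=threshold)
              hits += (t > 0) and (p / 2 < alpha)   # one-sided version of the test
          return hits / n_sim

      for thr in (4.0, 8.0):
          for n in (4, 8, 16, 32):
              print(f"threshold = {thr}  n = {n:2d}  power to detect +1 mg/kg = {power(n, thr, 1.0):.2f}")

    With the CV held fixed, the absolute spread grows with the mean, so the same +1 mg/kg increase is harder to detect at the higher threshold, which is the qualitative pattern reported above.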

  9. [On the impact of sample size calculation and power in clinical research].

    PubMed

    Held, Ulrike

    2014-10-01

    The aim of a clinical trial is to judge the efficacy of a new therapy or drug. In the planning phase of the study, the calculation of the necessary sample size is crucial in order to obtain a meaningful result. The study design, the expected treatment effect in outcome and its variability, power and level of significance are factors which determine the sample size. It is often difficult to fix these parameters prior to the start of the study, but related papers from the literature can be helpful sources for the unknown quantities. For scientific as well as ethical reasons it is necessary to calculate the sample size in advance in order to be able to answer the study question. PMID:25270749

  10. The influence of virtual sample size on confidence and causal-strength judgments.

    PubMed

    Liljeholm, Mimi; Cheng, Patricia W

    2009-01-01

    The authors investigated whether confidence in causal judgments varies with virtual sample size--the frequency of cases in which the outcome is (a) absent before the introduction of a generative cause or (b) present before the introduction of a preventive cause. Participants were asked to evaluate the influence of various candidate causes on an outcome as well as to rate their confidence in those judgments. They were presented with information on the relative frequencies of the outcome given the presence and absence of various candidate causes. These relative frequencies, sample size, and the direction of the causal influence (generative vs. preventive) were manipulated. It was found that both virtual and actual sample size affected confidence. Further, confidence affected estimates of strength, but confidence and strength are dissociable. The results enable a consistent explanation of the puzzling previous finding that observed causal-strength ratings often deviated from the predictions of both of the 2 dominant models of causal strength.

  11. Information-based sample size re-estimation in group sequential design for longitudinal trials.

    PubMed

    Zhou, Jing; Adewale, Adeniyi; Shentu, Yue; Liu, Jiajun; Anderson, Keaven

    2014-09-28

    Group sequential design has become more popular in clinical trials because it allows for trials to stop early for futility or efficacy to save time and resources. However, this approach is less well-known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting sample size to correct for inappropriate variance assumptions. We propose an information-based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We will illustrate our strategy with examples and simulations and compare the results with those obtained using fixed design and group sequential design without sample size re-estimation.
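
    A minimal Python sketch of the information-based update for a simple two-arm mean comparison (the effect size and variance figures are invented; the paper's longitudinal models and interim monitoring boundaries are not reproduced here):

      import numpy as np
      from scipy import stats

      def target_information(delta, alpha=0.05, power=0.90):
          """Information (1 / variance of the effect estimate) needed for the target power."""
          za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
          return ((za + zb) / delta) ** 2

      delta = 2.0                         # clinically relevant difference
      info_max = target_information(delta)

      sigma_planned = 5.0                 # SD assumed at the design stage
      n_planned = int(np.ceil(2 * sigma_planned**2 * info_max))   # per arm
      print("planned n per arm:", n_planned)

      # Interim look: the (blinded) SD estimate is larger than assumed, so the per-arm
      # sample size is re-estimated to reach the same information, preserving power.
      sigma_interim = 6.3
      print("re-estimated n per arm:", int(np.ceil(2 * sigma_interim**2 * info_max)))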

  12. Design and sample-size considerations in the detection of linkage disequilibrium with a disease locus

    SciTech Connect

    Olson, J.M.; Wijsman, E.M.

    1994-09-01

    The presence of linkage disequilibrium between closely linked loci can aid in the fine mapping of disease loci. The authors investigate the power of several designs for sampling individuals with different disease genotypes. As expected, haplotype data provide the greatest power for detecting disequilibrium, but, in the absence of parental information to resolve the phase of double heterozygotes, the most powerful design samples only individuals homozygous at the trait locus. For rare diseases, such a scheme is generally not feasible, and the authors also provide power and sample-size calculations for designs that sample heterozygotes. The results provide information useful in planning disequilibrium studies. 17 refs., 3 figs., 4 tabs.

  13. Mesh-size effects on drift sample composition as determined with a triple net sampler

    USGS Publications Warehouse

    Slack, K.V.; Tilley, L.J.; Kennelly, S.S.

    1991-01-01

    Nested nets of three different mesh apertures were used to study mesh-size effects on drift collected in a small mountain stream. The innermost, middle, and outermost nets had, respectively, 425 μm, 209 μm and 106 μm openings, a design that reduced clogging while partitioning collections into three size groups. The open area of mesh in each net, from largest to smallest mesh opening, was 3.7, 5.7 and 8.0 times the area of the net mouth. Volumes of filtered water were determined with a flowmeter. The results are expressed as (1) drift retained by each net, (2) drift that would have been collected by a single net of given mesh size, and (3) the percentage of total drift (the sum of the catches from all three nets) that passed through the 425 μm and 209 μm nets. During a two day period in August 1986, Chironomidae larvae were dominant numerically in all 209 μm and 106 μm samples and midday 425 μm samples. Large drifters (Ephemerellidae) occurred only in 425 μm or 209 μm nets, but the general pattern was an increase in abundance and number of taxa with decreasing mesh size. Relatively more individuals occurred in the larger mesh nets at night than during the day. The two larger mesh sizes retained 70% of the total sediment/detritus in the drift collections, and this decreased the rate of clogging of the 106 μm net. If an objective of a sampling program is to compare drift density or drift rate between areas or sampling dates, the same mesh size should be used for all sample collection and processing. The mesh aperture used for drift collection should retain all species and life stages of significance in a study. The nested net design enables an investigator to test the adequacy of drift samples. © 1991 Kluwer Academic Publishers.

  14. Power and Sample Size for Randomized Phase III Survival Trials under the Weibull Model

    PubMed Central

    Wu, Jianrong

    2015-01-01

    Two parametric tests are proposed for designing randomized two-arm phase III survival trials under the Weibull model. The properties of the two parametric tests are compared with the non-parametric log-rank test through simulation studies. Power and sample size formulas of the two parametric tests are derived. The impact on sample size under mis-specification of the Weibull shape parameter is also investigated. The study can be designed by planning the study duration and handling nonuniform entry and loss to follow-up under the Weibull model using either the proposed parametric tests or the well known non-parametric log-rank test. PMID:24895942
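
    The paper's parametric Weibull tests are not reproduced here; as a point of reference, the widely used Schoenfeld approximation for the log-rank test gives the required number of events, which a Weibull event-probability assumption can translate into an approximate total sample size (the hazard ratio, Weibull scale and shape, and follow-up time below are hypothetical):

      from math import ceil, exp, log
      from scipy.stats import norm

      def required_events(hr, alpha=0.05, power=0.80, allocation=0.5):
          """Schoenfeld's approximation for the log-rank test with the given allocation fraction."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return ceil(z ** 2 / (allocation * (1 - allocation) * log(hr) ** 2))

      def weibull_event_prob(t, scale, shape):
          """P(event by time t) under Weibull survival S(t) = exp(-(t/scale)^shape)."""
          return 1.0 - exp(-(t / scale) ** shape)

      d = required_events(hr=0.70)                               # events needed to detect HR 0.70
      p_event = weibull_event_prob(t=3.0, scale=5.0, shape=1.2)  # rough control-arm stand-in
      # total N approximated by events / event probability; ignores accrual pattern and dropout
      print(d, ceil(d / p_event))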

  15. A simple formula for the calculation of sample size in pilot studies.

    PubMed

    Viechtbauer, Wolfgang; Smits, Luc; Kotz, Daniel; Budé, Luc; Spigt, Mark; Serroyen, Jan; Crutzen, Rik

    2015-11-01

    One of the goals of a pilot study is to identify unforeseen problems, such as ambiguous inclusion or exclusion criteria or misinterpretations of questionnaire items. Although sample size calculation methods for pilot studies have been proposed, none of them are directed at the goal of problem detection. In this article, we present a simple formula to calculate the sample size needed to be able to identify, with a chosen level of confidence, problems that may arise with a given probability. If a problem exists with 5% probability in a potential study participant, the problem will almost certainly be identified (with 95% confidence) in a pilot study including 59 participants. PMID:26146089
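
    The 59-participant example follows from the standard "probability of observing at least one occurrence" calculation, 1 - (1 - p)^n >= confidence; a minimal sketch that reproduces it:

      from math import ceil, log

      def pilot_n(problem_prob, confidence=0.95):
          """Smallest n with P(problem observed at least once) >= confidence,
          i.e. 1 - (1 - p)^n >= confidence."""
          return ceil(log(1 - confidence) / log(1 - problem_prob))

      print(pilot_n(0.05))                 # 59, matching the example in the abstract
      print(pilot_n(0.10), pilot_n(0.02))  # 29 and 149 for rarer or more common problems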

  16. Miniaturization in voltammetry: ultratrace element analysis and speciation with twenty-fold sample size reduction.

    PubMed

    Monticelli, D; Laglera, L M; Caprara, S

    2014-10-01

    Voltammetric techniques have emerged as powerful methods for the determination and speciation of trace and ultratrace elements without any preconcentration in several research fields. Nevertheless, large sample volumes are typically required (10 mL), which strongly limits their application and/or the precision of the results. In this work, we report a 20-fold reduction in sample size for trace and ultratrace elemental determination and speciation by conventional voltammetric instrumentation, introducing the lowest amount of sample (0.5 mL) in which ultratrace detection has been performed up to now. This goal was achieved by a careful design of a new sample holder. Reliable, validated results were obtained for the determination of trace/ultratrace elements in rainwater (Cd, Co, Cu, Ni, Pb) and seawater (Cu). Moreover, copper speciation in seawater samples was consistently determined by competitive ligand equilibration-cathodic stripping voltammetry (CLE-CSV). The proposed apparatus showed several advantages: (1) 20-fold reduction in sample volume (the sample size is lowered from 120 to 6 mL for the CLE-CSV procedure); (2) decrease in analysis time due to the reduction in purging time up to 2.5 fold; (3) 20-fold drop in reagent consumption. Moreover, the analytical performances were not affected: similar detection capabilities, precision and accuracy were obtained. Application to sample of limited availability (e.g. porewaters, snow, rainwater, open ocean water, biological samples) and to the description of high resolution temporal trends may be easily foreseen.

  17. Simulation analyses of space use: Home range estimates, variability, and sample size

    USGS Publications Warehouse

    Bekoff, M.; Mech, L.D.

    1984-01-01

    Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should obtain between 100 and 200 locations in order to estimate home range area reliably. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also has to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.

  18. Sample Size Considerations in Prevention Research Applications of Multilevel Modeling and Structural Equation Modeling.

    PubMed

    Hoyle, Rick H; Gottfredson, Nisha C

    2015-10-01

    When the goal of prevention research is to capture in statistical models some measure of the dynamic complexity in structures and processes implicated in problem behavior and its prevention, approaches such as multilevel modeling (MLM) and structural equation modeling (SEM) are indicated. Yet the assumptions that must be satisfied if these approaches are to be used responsibly raise concerns regarding their use in prevention research involving smaller samples. In this article, we discuss in nontechnical terms the role of sample size in MLM and SEM and present findings from the latest simulation work on the performance of each approach at sample sizes typical of prevention research. For each statistical approach, we draw from extant simulation studies to establish lower bounds for sample size (e.g., MLM can be applied with as few as ten groups comprising ten members with normally distributed data, restricted maximum likelihood estimation, and a focus on fixed effects; sample sizes as small as N = 50 can produce reliable SEM results with normally distributed data and at least three reliable indicators per factor) and suggest strategies for making the best use of the modeling approach when N is near the lower bound.

  19. Sample size determination for testing equality in a cluster randomized trial with noncompliance.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2011-01-01

    For administrative convenience or cost efficiency, we may often employ a cluster randomized trial (CRT), in which randomized units are clusters of patients rather than individual patients. Furthermore, because of ethical reasons or patient's decision, it is not uncommon to encounter data in which there are patients not complying with their assigned treatments. Thus, the development of a sample size calculation procedure for a CRT with noncompliance is important and useful in practice. Under the exclusion restriction model, we have developed an asymptotic test procedure using a tanh(-1)(x) transformation for testing equality between two treatments among compliers for a CRT with noncompliance. We have further derived a sample size formula accounting for both noncompliance and the intraclass correlation for a desired power 1 - β at a nominal α level. We have employed Monte Carlo simulation to evaluate the finite-sample performance of the proposed test procedure with respect to type I error and the accuracy of the derived sample size calculation formula with respect to power in a variety of situations. Finally, we use the data taken from a CRT studying vitamin A supplementation to reduce mortality among preschool children to illustrate the use of sample size calculation proposed here. PMID:21191850
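
    The tanh⁻¹-based test derived in the paper is not reproduced here; the sketch below only illustrates the two familiar ingredients the abstract points to: inflating an individually randomized sample size by the design effect 1 + (m - 1)ρ for clustering (m = cluster size, ρ = intraclass correlation), and deflating the effective effect size by the compliance proportion. All inputs are hypothetical:

      from math import ceil
      from scipy.stats import norm

      def crt_sample_size(p1, p2, m, icc, compliance=1.0, alpha=0.05, power=0.80):
          """Individuals per arm for comparing two proportions in a cluster randomized trial:
          individually randomized n, inflated by the design effect 1 + (m - 1)*icc for
          clustering and by 1/compliance**2 for intention-to-treat dilution of the effect."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          p_bar = (p1 + p2) / 2
          n_ind = z ** 2 * 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2
          deff = 1 + (m - 1) * icc
          return ceil(n_ind * deff / compliance ** 2)

      n = crt_sample_size(p1=0.30, p2=0.20, m=20, icc=0.02, compliance=0.90)
      print(f"{n} individuals per arm, i.e. {ceil(n / 20)} clusters of 20 per arm")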

  20. Big data and large sample size: a cautionary note on the potential for bias.

    PubMed

    Kaplan, Robert M; Chambers, David A; Glasgow, Russell E

    2014-08-01

    A number of commentaries have suggested that large studies are more reliable than smaller studies and there is a growing interest in the analysis of "big data" that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design.

  1. Estimating variable effective population sizes from multiple genomes: a sequentially markov conditional sampling distribution approach.

    PubMed

    Sheehan, Sara; Harris, Kelley; Song, Yun S

    2013-07-01

    Throughout history, the population size of modern humans has varied considerably due to changes in environment, culture, and technology. More accurate estimates of population size changes, and when they occurred, should provide a clearer picture of human colonization history and help remove confounding effects from natural selection inference. Demography influences the pattern of genetic variation in a population, and thus genomic data of multiple individuals sampled from one or more present-day populations contain valuable information about the past demographic history. Recently, Li and Durbin developed a coalescent-based hidden Markov model, called the pairwise sequentially Markovian coalescent (PSMC), for a pair of chromosomes (or one diploid individual) to estimate past population sizes. This is an efficient, useful approach, but its accuracy in the very recent past is hampered by the fact that, because of the small sample size, only few coalescence events occur in that period. Multiple genomes from the same population contain more information about the recent past, but are also more computationally challenging to study jointly in a coalescent framework. Here, we present a new coalescent-based method that can efficiently infer population size changes from multiple genomes, providing access to a new store of information about the recent past. Our work generalizes the recently developed sequentially Markov conditional sampling distribution framework, which provides an accurate approximation of the probability of observing a newly sampled haplotype given a set of previously sampled haplotypes. Simulation results demonstrate that we can accurately reconstruct the true population histories, with a significant improvement over the PSMC in the recent past. We apply our method, called diCal, to the genomes of multiple human individuals of European and African ancestry to obtain a detailed population size change history during recent times.

  2. Estimating Variable Effective Population Sizes from Multiple Genomes: A Sequentially Markov Conditional Sampling Distribution Approach

    PubMed Central

    Sheehan, Sara; Harris, Kelley; Song, Yun S.

    2013-01-01

    Throughout history, the population size of modern humans has varied considerably due to changes in environment, culture, and technology. More accurate estimates of population size changes, and when they occurred, should provide a clearer picture of human colonization history and help remove confounding effects from natural selection inference. Demography influences the pattern of genetic variation in a population, and thus genomic data of multiple individuals sampled from one or more present-day populations contain valuable information about the past demographic history. Recently, Li and Durbin developed a coalescent-based hidden Markov model, called the pairwise sequentially Markovian coalescent (PSMC), for a pair of chromosomes (or one diploid individual) to estimate past population sizes. This is an efficient, useful approach, but its accuracy in the very recent past is hampered by the fact that, because of the small sample size, only few coalescence events occur in that period. Multiple genomes from the same population contain more information about the recent past, but are also more computationally challenging to study jointly in a coalescent framework. Here, we present a new coalescent-based method that can efficiently infer population size changes from multiple genomes, providing access to a new store of information about the recent past. Our work generalizes the recently developed sequentially Markov conditional sampling distribution framework, which provides an accurate approximation of the probability of observing a newly sampled haplotype given a set of previously sampled haplotypes. Simulation results demonstrate that we can accurately reconstruct the true population histories, with a significant improvement over the PSMC in the recent past. We apply our method, called diCal, to the genomes of multiple human individuals of European and African ancestry to obtain a detailed population size change history during recent times. PMID:23608192

  3. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination.
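
    One plausible reading of the recipe above for a logistic model with a single covariate: form two equally sized groups whose logits are separated by twice the slope times the standard deviation of the covariate, centred on the overall response probability (a simplification of the paper's event-preserving centring), and then apply an ordinary two-proportion power calculation. All inputs below are hypothetical:

      from math import exp, log, sqrt
      from scipy.stats import norm

      def logistic(x):
          return 1.0 / (1.0 + exp(-x))

      def logit(p):
          return log(p / (1 - p))

      def power_logistic_slope(beta, sd_x, p_overall, n_total, alpha=0.05):
          """Approximate power for H0: beta = 0 via an equivalent two-sample problem:
          two groups of size n/2 whose logits straddle logit(p_overall) and are
          separated by 2 * beta * sd_x."""
          half = beta * sd_x
          p1 = logistic(logit(p_overall) - half)
          p2 = logistic(logit(p_overall) + half)
          n = n_total / 2
          se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
          z = abs(p2 - p1) / se
          return norm.cdf(z - norm.ppf(1 - alpha / 2))

      # hypothetical: slope 0.5 per SD of x, 30% overall event rate, 200 subjects
      print(round(power_logistic_slope(beta=0.5, sd_x=1.0, p_overall=0.30, n_total=200), 3))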

  4. Size-dependent Turbidimetric Quantification of Mobile Colloids in Field Samples

    NASA Astrophysics Data System (ADS)

    Yan, J.; Meng, X.; Jin, Y.

    2015-12-01

    Natural colloids, often defined as entities with sizes < 1.0 μm, have attracted much research attention because of their ability to facilitate the transport of contaminants in the subsurface environment. However, due to their small size and generally low concentrations in field samples, quantification of mobile colloids, especially the smaller fractions (< 0.45 µm), which are operationally defined as dissolved, is largely impeded and hence the natural colloidal pool is greatly overlooked and underestimated. The main objectives of this study are to: (1) develop an experimentally and economically efficient methodology to quantify natural colloids in different size fractions (0.1-0.45 and 0.45-1 µm); (2) quantify mobile colloids, particularly the small (< 0.45 µm) colloids, in different natural aquatic samples. We measured and generated correlations between mass concentration and turbidity of colloid suspensions, made by extracting and fractionating water dispersible colloids in 37 soils from different areas in the U.S. and Denmark, for colloid size fractions 0.1-0.45 and 0.45-1 µm. Results show that the correlation between turbidity and colloid mass concentration is largely affected by colloid size and iron content, indicating the need to generate different correlations for colloids with constrained size range and iron content. This method enabled quick quantification of colloid concentrations in different size fractions for a large number of field samples collected from freshwater, wetland, and estuarine environments. As a general trend, we observed high concentrations of colloids in the < 0.45 µm fraction, which constitutes a significant percentage of the total mobile colloidal pool (< 1 µm). This observation suggests that the operationally defined cut-off size for the "dissolved" phase can greatly underestimate colloid concentrations and, therefore, the role that colloids play in the transport of associated contaminants or other elements.

  5. Sample-Size Effects on the Compression Behavior of a Ni-Based Amorphous Alloy

    NASA Astrophysics Data System (ADS)

    Liang, Weizhong; Zhao, Guogang; Wu, Linzhi; Yu, Hongjun; Li, Ming; Zhang, Lin

    Ni42Cu5Ti20Zr21.5Al8Si3.5 bulk metallic glass rods with diameters of 1 mm and 3 mm were prepared by arc melting of the constituent elements in a Ti-gettered argon atmosphere. The compressive deformation and fracture behavior of the amorphous alloy samples of different sizes were investigated using a testing machine and scanning electron microscopy. The compressive stress-strain curves of the 1 mm and 3 mm samples exhibited 4.5% and 0% plastic strain, while the compressive fracture strengths of the 1 mm and 3 mm rods were 4691 MPa and 2631 MPa, respectively. The compressive fracture surfaces of both sample sizes consisted of a shear zone and a non-shear zone. Typical vein patterns with some melting droplets were seen on the shear region of the 1 mm rod, while fish-bone patterns were observed on the 3 mm specimen surface. Periodic ripples with different spacings were present on the non-shear zones of the 1 mm and 3 mm rods. On the side surface of the 1 mm sample, a high density of shear bands was observed, and slip of the shear bands was visible. The mechanisms of the effect of sample size on the fracture strength and plasticity of the Ni-based amorphous alloy are discussed.

  6. Measurements of size-segregated emission particles by a sampling system based on the cascade impactor

    SciTech Connect

    Janja Tursic; Irena Grgic; Axel Berner; Jaroslav Skantar; Igor Cuhalev

    2008-02-01

    A special sampling system for measurements of size-segregated particles directly at the source of emission was designed and constructed. The central part of this system is a low-pressure cascade impactor with 10 collection stages for the size ranges between 15 nm and 16 μm. Its capability and suitability were proven by sampling particles at the stack (100 °C) of a coal-fired power station in Slovenia. These measurements showed very reasonable results in comparison with a commercial cascade impactor for PM10 and PM2.5 and with a plane device for total suspended particulate matter (TSP). The best agreement with the measurements made by a commercial impactor was found for concentrations of TSP above 10 mg m⁻³, i.e., the average PM2.5/PM10 ratios obtained by a commercial impactor and by our impactor were 0.78 and 0.80, respectively. Analysis of selected elements in size-segregated emission particles additionally confirmed the suitability of our system. The measurements showed that the mass size distributions were generally bimodal, with the most pronounced mass peak in the 1-2 μm size range. The first results of elemental mass size distributions showed some distinctive differences in comparison to the most common ambient anthropogenic sources (i.e., traffic emissions). For example, trace elements, like Pb, Cd, As, and V, typically related to traffic emissions, are usually more abundant in particles less than 1 μm in size, whereas in our specific case they were found at about 2 μm. Thus, these mass size distributions can be used as a signature of this source. Simultaneous measurements of size-segregated particles at the source and in the surrounding environment can therefore significantly increase the sensitivity of the contribution of a specific source to the actual ambient concentrations. 25 refs., 3 figs., 2 tabs.

  7. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  8. Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research

    ERIC Educational Resources Information Center

    Cook, David A.; Hatala, Rose

    2015-01-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…

  9. Power and Sample Size Calculations for Multivariate Linear Models with Random Explanatory Variables

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2005-01-01

    This article considers the problem of power and sample size calculations for normal outcomes within the framework of multivariate linear models. The emphasis is placed on the practical situation that not only the values of response variables for each subject are just available after the observations are made, but also the levels of explanatory…

  10. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  11. Using Structural Equation Modeling to Assess Functional Connectivity in the Brain: Power and Sample Size Considerations

    ERIC Educational Resources Information Center

    Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack

    2014-01-01

    The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…

  12. Seasonal and Particle Size-Dependent Variations of Hexabromocyclododecanes in Settled Dust: Implications for Sampling.

    PubMed

    Cao, Zhiguo; Xu, Fuchao; Li, Wenchao; Sun, Jianhui; Shen, Mohai; Su, Xianfa; Feng, Jinglan; Yu, Gang; Covaci, Adrian

    2015-09-15

    Particle size is a significant parameter which determines the environmental fate and the behavior of dust particles and, implicitly, the exposure risk of humans to particle-bound contaminants. Currently, the influence of dust particle size on the occurrence and seasonal variation of hexabromocyclododecanes (HBCDs) remains unclear. While HBCDs are now restricted by the Stockholm Convention, information regarding HBCD contamination in indoor dust in China is still limited. We analyzed composite dust samples from offices (n = 22), hotels (n = 3), kindergartens (n = 2), dormitories (n = 40), and main roads (n = 10). Each composite dust sample (one per type of microenvironment) was fractionated into 9 fractions (F1-F9: 2000-900, 900-500, 500-400, 400-300, 300-200, 200-100, 100-74, 74-50, and <50 μm). Total HBCD concentrations ranged from 5.3 (road dust, F4) to 2580 ng g(-1) (dormitory dust, F4) in the 45 size-segregated samples. The seasonality of HBCDs in indoor dust was investigated in 40 samples from two offices. A consistent seasonal trend of HBCD levels was evident with dust collected in the winter being more contaminated with HBCDs than dust from the summer. Particle size-selection strategy for dust analysis has been found to be influential on the HBCD concentrations, while overestimation or underestimation would occur with improper strategies. PMID:26301772

  13. The Relation among Fit Indexes, Power, and Sample Size in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Kim, Kevin H.

    2005-01-01

    The relation among fit indexes, power, and sample size in structural equation modeling is examined. The noncentrality parameter is required to compute power. The 2 existing methods of computing power have estimated the noncentrality parameter by specifying an alternative hypothesis or alternative fit. These methods cannot be implemented easily and…

  14. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    ERIC Educational Resources Information Center

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  15. Sample Size Planning for the Standardized Mean Difference: Accuracy in Parameter Estimation via Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Rausch, Joseph R.

    2006-01-01

    Methods for planning sample size (SS) for the standardized mean difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach are developed. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no…

  16. On efficient two-stage adaptive designs for clinical trials with sample size adjustment.

    PubMed

    Liu, Qing; Li, Gang; Anderson, Keaven M; Lim, Pilar

    2012-01-01

    Group sequential designs are rarely used for clinical trials with substantial overrunning due to fast enrollment or long duration of treatment and follow-up. Traditionally, such trials rely on fixed sample size designs. Recently, various two-stage adaptive designs have been introduced to allow sample size adjustment to increase statistical power or avoid unnecessarily large trials. However, these adaptive designs can be seriously inefficient. To address this infamous problem, we propose a likelihood-based two-stage adaptive design where sample size adjustment is derived from a pseudo group sequential design using cumulative conditional power. We show through numerical examples that this design cannot be improved by group sequential designs. In addition, the approach may uniformly improve any existing two-stage adaptive designs with sample size adjustment. For statistical inference, we provide methods for sequential p-values and confidence intervals, as well as median unbiased and minimum variance unbiased estimates. We show that the claim of inefficiency of adaptive designs by Tsiatis and Mehta (2003) is logically flawed, and thereby provide a strong defense of Cui et al. (1999). PMID:22651105

  17. Sample Size Requirements in Single- and Multiphase Growth Mixture Models: A Monte Carlo Simulation Study

    ERIC Educational Resources Information Center

    Kim, Su-Young

    2012-01-01

    Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the…

  18. Sample size planning for the coefficient of variation from the accuracy in parameter estimation approach.

    PubMed

    Kelley, Ken

    2007-11-01

    The accuracy in parameter estimation approach to sample size planning is developed for the coefficient of variation, where the goal of the method is to obtain an accurate parameter estimate by achieving a sufficiently narrow confidence interval. The first method allows researchers to plan sample size so that the expected width of the confidence interval for the population coefficient of variation is sufficiently narrow. A modification allows a desired degree of assurance to be incorporated into the method, so that the obtained confidence interval will be sufficiently narrow with some specified probability (e.g., 85% assurance that the 95% confidence interval will be no wider than the specified width). Tables of necessary sample sizes are provided for a variety of scenarios to help researchers planning a study in which the coefficient of variation is of interest choose an appropriate sample size in order to obtain a sufficiently narrow confidence interval, optionally with some specified assurance that the confidence interval will be sufficiently narrow. Freely available computer routines have been developed that allow researchers to easily implement all of the methods discussed in the article.

  19. The Influence of Virtual Sample Size on Confidence and Causal-Strength Judgments

    ERIC Educational Resources Information Center

    Liljeholm, Mimi; Cheng, Patricia W.

    2009-01-01

    The authors investigated whether confidence in causal judgments varies with virtual sample size--the frequency of cases in which the outcome is (a) absent before the introduction of a generative cause or (b) present before the introduction of a preventive cause. Participants were asked to evaluate the influence of various candidate causes on an…

  20. Sample size calculations for clinical trials targeting tauopathies: A new potential disease target

    PubMed Central

    Whitwell, Jennifer L.; Duffy, Joseph R.; Strand, Edythe A.; Machulda, Mary M.; Tosakulwong, Nirubol; Weigand, Stephen D.; Senjem, Matthew L.; Spychalla, Anthony J.; Gunter, Jeffrey L.; Petersen, Ronald C.; Jack, Clifford R.; Josephs, Keith A.

    2015-01-01

    Disease-modifying therapies are being developed to target tau pathology, and should, therefore, be tested in primary tauopathies. We propose that progressive apraxia of speech should be considered one such target group. In this study, we investigate potential neuroimaging and clinical outcome measures for progressive apraxia of speech and determine sample size estimates for clinical trials. We prospectively recruited 24 patients with progressive apraxia of speech who underwent two serial MRI with an interval of approximately two years. Detailed speech and language assessments included the Apraxia of Speech Rating Scale (ASRS) and Motor Speech Disorders (MSD) severity scale. Rates of ventricular expansion and rates of whole brain, striatal and midbrain atrophy were calculated. Atrophy rates across 38 cortical regions were also calculated and the regions that best differentiated patients from controls were selected. Sample size estimates required to power placebo-controlled treatment trials were calculated. The smallest sample size estimates were obtained with rates of atrophy of the precentral gyrus and supplementary motor area, with both measures requiring less than 50 subjects per arm to detect a 25% treatment effect with 80% power. These measures outperformed the other regional and global MRI measures and the clinical scales. Regional rates of cortical atrophy therefore provide the best outcome measures in progressive apraxia of speech. The small sample size estimates demonstrate feasibility for including progressive apraxia of speech in future clinical treatment trials targeting tau. PMID:26076744

  1. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    NASA Technical Reports Server (NTRS)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.

  2. The effects of sampling and internal noise on the representation of ensemble average size.

    PubMed

    Im, Hee Yeon; Halberda, Justin

    2013-02-01

    Increasing numbers of studies have explored human observers' ability to rapidly extract statistical descriptions from collections of similar items (e.g., the average size and orientation of a group of tilted Gabor patches). Determining whether these descriptions are generated by mechanisms that are independent from object-based sampling procedures requires that we investigate how internal noise, external noise, and sampling affect subjects' performance. Here we systematically manipulated the external variability of ensembles and used variance summation modeling to estimate both the internal noise and the number of samples that affected the representation of ensemble average size. The results suggest that humans sample many more than one or two items from an array when forming an estimate of the average size, and that the internal noise that affects ensemble processing is lower than the noise that affects the processing of single objects. These results are discussed in light of other recent modeling efforts and suggest that ensemble processing of average size relies on a mechanism that is distinct from segmenting individual items. This ensemble process may be more similar to texture processing.

  3. One-Sided Nonparametric Comparison of Treatments with a Standard for Unequal Sample Sizes.

    ERIC Educational Resources Information Center

    Chakraborti, S.; Gibbons, Jean D.

    1992-01-01

    The one-sided problem of comparing treatments with a standard on the basis of data available in the context of a one-way analysis of variance is examined, and the methodology of S. Chakraborti and J. D. Gibbons (1991) is extended to the case of unequal sample sizes. (SLD)

  4. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  5. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function
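
    A minimal scikit-learn sketch of the comparison described above, using synthetic data in place of core measurements and default-ish settings rather than the authors' tuning; it only illustrates how SVR and an MLP can be cross-validated head-to-head on a deliberately small sample:

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 30                                   # deliberately small training sample
      X = rng.normal(size=(n, 4))              # stand-ins for well-log inputs (hypothetical)
      y = 0.15 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(scale=0.01, size=n)  # "porosity"

      svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
      mlp = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))

      for name, model in [("SVR", svr), ("MLP", mlp)]:
          scores = cross_val_score(model, X, y, cv=5, scoring="r2")
          print(f"{name}: mean CV R^2 = {scores.mean():.3f}")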

  6. Forestry inventory based on multistage sampling with probability proportional to size

    NASA Technical Reports Server (NTRS)

    Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.

    1983-01-01

    A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. The LANDSAT data, Panchromatic aerial photographs, and field data are collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.
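
    The multistage LANDSAT workflow is not reproduced here; the sketch below only illustrates the probability-proportional-to-size step itself, using a with-replacement PPS draw and the corresponding Hansen-Hurwitz estimator of a total (stand sizes and volumes are hypothetical):

      import numpy as np

      rng = np.random.default_rng(1)

      # hypothetical forest stands: "size" could be stand area from imagery,
      # volume the quantity actually measured in the field for selected stands
      sizes = np.array([120.0, 80.0, 200.0, 50.0, 150.0, 400.0])
      volumes = np.array([1.1, 0.7, 2.1, 0.4, 1.6, 4.3]) * 1000  # m^3

      p = sizes / sizes.sum()                  # selection probability proportional to size
      n = 3
      idx = rng.choice(len(sizes), size=n, replace=True, p=p)   # with-replacement PPS draw

      # Hansen-Hurwitz estimator of the total volume and its standard error
      y_over_p = volumes[idx] / p[idx]
      total_hat = y_over_p.mean()
      se_hat = y_over_p.std(ddof=1) / np.sqrt(n)
      print(f"estimated total volume: {total_hat:.0f} m^3 (SE {se_hat:.0f})")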

  7. Sample sizes for brain atrophy outcomes in trials for secondary progressive multiple sclerosis

    PubMed Central

    Altmann, D R.; Jasperse, B; Barkhof, F; Beckmann, K; Filippi, M; Kappos, L D.; Molyneux, P; Polman, C H.; Pozzilli, C; Thompson, A J.; Wagner, K; Yousry, T A.; Miller, D H.

    2009-01-01

    Background: Progressive brain atrophy in multiple sclerosis (MS) may reflect neuroaxonal and myelin loss and MRI measures of brain tissue loss are used as outcome measures in MS treatment trials. This study investigated sample sizes required to demonstrate reduction of brain atrophy using three outcome measures in a parallel group, placebo-controlled trial for secondary progressive MS (SPMS). Methods: Data were taken from a cohort of 43 patients with SPMS who had been followed up with 6-monthly T1-weighted MRI for up to 3 years within the placebo arm of a therapeutic trial. Central cerebral volumes (CCVs) were measured using a semiautomated segmentation approach, and brain volume normalized for skull size (NBV) was measured using automated segmentation (SIENAX). Change in CCV and NBV was measured by subtraction of baseline from serial CCV and SIENAX images; in addition, percentage brain volume change relative to baseline was measured directly using a registration-based method (SIENA). Sample sizes for given treatment effects and power were calculated for standard analyses using parameters estimated from the sample. Results: For a 2-year trial duration, minimum sample sizes per arm required to detect a 50% treatment effect at 80% power were 32 for SIENA, 69 for CCV, and 273 for SIENAX. Two-year minimum sample sizes were smaller than 1-year by 71% for SIENAX, 55% for CCV, and 44% for SIENA. Conclusion: SIENA and central cerebral volume are feasible outcome measures for inclusion in placebo-controlled trials in secondary progressive multiple sclerosis. GLOSSARY ANCOVA = analysis of covariance; CCV = central cerebral volume; FSL = FMRIB Software Library; MNI = Montreal Neurological Institute; MS = multiple sclerosis; NBV = normalized brain volume; PBVC = percent brain volume change; RRMS = relapsing–remitting multiple sclerosis; SPMS = secondary progressive multiple sclerosis. PMID:19005170

  8. Gutenberg-Richter b-value maximum likelihood estimation and sample size

    NASA Astrophysics Data System (ADS)

    Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.

    2016-06-01

    The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are conscious of the method's limitations and implicit requirements. The Aki/Utsu method requires a representative estimate of the population mean magnitude; a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample weakly depends on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimations of b. Our results give estimates on the probabilities of getting correct estimates of b for a given desired precision for samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
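
    A minimal sketch of the Aki-Utsu estimator for binned magnitudes and a Monte Carlo check of how the spread of b estimates shrinks with catalogue size (the completeness magnitude, bin width, and sample sizes below are illustrative):

      import numpy as np

      rng = np.random.default_rng(2)

      def b_aki_utsu(mags, m_c, dm=0.1):
          """Aki-Utsu maximum-likelihood b-value for magnitudes binned to width dm,
          with m_c taken as the centre of the lowest complete bin:
          b = log10(e) / (mean(M) - (m_c - dm/2))."""
          return np.log10(np.e) / (mags.mean() - (m_c - dm / 2))

      def simulate_b(true_b, n, m_c=2.0, dm=0.1, n_sim=2000):
          """Spread of b estimates for catalogues of n events (exponential magnitudes
          rounded to dm), to gauge how large a sample a stable estimate needs."""
          beta = true_b * np.log(10)
          ests = []
          for _ in range(n_sim):
              m = (m_c - dm / 2) + rng.exponential(1 / beta, size=n)
              m = np.round(m / dm) * dm            # magnitude rounding
              ests.append(b_aki_utsu(m, m_c, dm))
          return np.array(ests)

      for n in (25, 100, 400, 1600):
          est = simulate_b(true_b=1.0, n=n)
          print(f"n={n:5d}: mean b = {est.mean():.3f}, sd = {est.std():.3f}")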

  9. 10Be measurements at MALT using reduced-size samples of bulk sediments

    NASA Astrophysics Data System (ADS)

    Horiuchi, Kazuho; Oniyanagi, Itsumi; Wasada, Hiroshi; Matsuzaki, Hiroyuki

    2013-01-01

    In order to establish 10Be measurements on reduced-size (1-10 mg) samples of bulk sediments, we investigated four different pretreatment designs using lacustrine and marginal-sea sediments and the AMS system of the Micro Analysis Laboratory, Tandem accelerator (MALT) at The University of Tokyo. The 10Be concentrations obtained from the samples of 1-10 mg agreed within a precision of 3-5% with the values previously determined using corresponding ordinary-size (∼200 mg) samples and the same AMS system. This fact demonstrates reliable determinations of 10Be with milligram levels of recent bulk sediments at MALT. On the other hand, a clear decline of the BeO- beam with tens of micrograms of 9Be carrier suggests that the combination of ten milligrams of sediments and a few hundred micrograms of the 9Be carrier is more convenient at this stage.

  10. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    NASA Astrophysics Data System (ADS)

    Khanh Huynh, Cong; Duc, Trinh Vu

    2009-02-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) have established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use Polyurethane Plastic Foam (PUF) whose porosity serves both as the sampling substrate and as the particle-size separator. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  11. Estimating the Correlation in Bivariate Normal Data with Known Variances and Small Sample Sizes

    PubMed Central

    Fosdick, Bailey K.; Raftery, Adrian E.

    2013-01-01

    We consider the problem of estimating the correlation in bivariate normal data when the means and variances are assumed known, with emphasis on the small sample case. We consider eight different estimators, several of them considered here for the first time in the literature. In a simulation study, we found that Bayesian estimators using the uniform and arc-sine priors outperformed several empirical and exact or approximate maximum likelihood estimators in small samples. The arc-sine prior did better for large values of the correlation. For testing whether the correlation is zero, we found that Bayesian hypothesis tests outperformed significance tests based on the empirical and exact or approximate maximum likelihood estimators considered in small samples, but that all tests performed similarly for sample size 50. These results lead us to suggest using the posterior mean with the arc-sine prior to estimate the correlation in small samples when the variances are assumed known. PMID:23378667
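
    A minimal grid-based sketch of the posterior mean for the correlation under an arc-sine prior when the means and variances are known (taken as 0 and 1 here); it is only an illustration of the idea, not the authors' full comparison of estimators:

      import numpy as np

      rng = np.random.default_rng(3)

      def posterior_mean_corr(x, y, prior="arcsine", n_grid=4001):
          """Grid-based posterior mean of the correlation for bivariate normal data
          with known means (0) and known variances (1).
          prior: 'arcsine' (proportional to 1/sqrt(1 - rho^2)) or 'uniform'."""
          n = len(x)
          sxx, syy, sxy = np.sum(x * x), np.sum(y * y), np.sum(x * y)
          rho = np.linspace(-0.999, 0.999, n_grid)
          log_lik = (-0.5 * n * np.log1p(-rho ** 2)
                     - (sxx - 2 * rho * sxy + syy) / (2 * (1 - rho ** 2)))
          log_prior = -0.5 * np.log1p(-rho ** 2) if prior == "arcsine" else 0.0
          log_post = log_lik + log_prior
          w = np.exp(log_post - log_post.max())     # unnormalised posterior on the grid
          return np.sum(rho * w) / np.sum(w)

      # small-sample example: 10 pairs with a true correlation of 0.6 (hypothetical)
      true_rho = 0.6
      cov = np.array([[1.0, true_rho], [true_rho, 1.0]])
      x, y = rng.multivariate_normal([0.0, 0.0], cov, size=10).T
      r_moment = np.sum(x * y) / len(x)             # moment estimator with known variances
      print(f"moment estimate {r_moment:.3f}, posterior mean (arc-sine) {posterior_mean_corr(x, y):.3f}")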

  12. Power analysis and sample size estimation for RNA-Seq differential expression

    PubMed Central

    Ching, Travers; Huang, Sijia

    2014-01-01

    It is crucial for researchers to optimize RNA-seq experimental designs for differential expression detection. Currently, the field lacks general methods to estimate power and sample size for RNA-Seq in complex experimental designs, under the assumption of the negative binomial distribution. We simulate RNA-Seq count data based on parameters estimated from six widely different public data sets (including cell line comparison, tissue comparison, and cancer data sets) and calculate the statistical power in paired and unpaired sample experiments. We comprehensively compare five differential expression analysis packages (DESeq, edgeR, DESeq2, sSeq, and EBSeq) and evaluate their performance by power, receiver operator characteristic (ROC) curves, and other metrics including areas under the curve (AUC), Matthews correlation coefficient (MCC), and F-measures. DESeq2 and edgeR tend to give the best performance in general. Increasing sample size or sequencing depth increases power; however, increasing sample size is more potent than sequencing depth to increase power, especially when the sequencing depth reaches 20 million reads. Long intergenic noncoding RNAs (lincRNAs) yield lower power relative to the protein-coding mRNAs, given their lower expression level in the same RNA-Seq experiment. On the other hand, paired-sample RNA-Seq significantly enhances the statistical power, confirming the importance of considering the multifactor experimental design. Finally, a local optimal power is achievable for a given budget constraint, and the dominant contributing factor is sample size rather than the sequencing depth. In conclusion, we provide a power analysis tool (http://www2.hawaii.edu/~lgarmire/RNASeqPowerCalculator.htm) that captures the dispersion in the data and can serve as a practical reference under the budget constraint of RNA-Seq experiments. PMID:25246651
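
    The packages compared above fit negative-binomial GLMs with dispersion shrinkage; the toy simulation below is not a substitute for them, but it shows the basic mechanics of estimating per-gene power as a function of sample size under an assumed mean, fold change, and dispersion:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      def nb_counts(mean, dispersion, size):
          """NB counts via a gamma-Poisson mixture: var = mean + dispersion * mean^2."""
          lam = rng.gamma(shape=1.0 / dispersion, scale=mean * dispersion, size=size)
          return rng.poisson(lam)

      def power_one_gene(base_mean, fold_change, dispersion, n_per_group,
                         alpha=0.01, n_sim=2000):
          """Toy power estimate: Welch t-test on log2(count + 1) per simulated data set.
          Real tools (DESeq2, edgeR) use NB GLMs and shrinkage; this is only a sketch."""
          hits = 0
          for _ in range(n_sim):
              a = np.log2(nb_counts(base_mean, dispersion, n_per_group) + 1)
              b = np.log2(nb_counts(base_mean * fold_change, dispersion, n_per_group) + 1)
              _, p = stats.ttest_ind(a, b, equal_var=False)
              hits += p < alpha
          return hits / n_sim

      for n in (3, 5, 10):
          print(n, power_one_gene(base_mean=100, fold_change=2.0, dispersion=0.1, n_per_group=n))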

  13. Power and sample size estimation for epigenome-wide association scans to detect differential DNA methylation

    PubMed Central

    Tsai, Pei-Chien; Bell, Jordana T

    2015-01-01

    Background: Epigenome-wide association scans (EWAS) are under way for many complex human traits, but EWAS power has not been fully assessed. We investigate the power of EWAS to detect differential methylation using case-control and disease-discordant monozygotic (MZ) twin designs with genome-wide DNA methylation arrays. Methods and Results: We performed simulations to estimate power under the case-control and discordant MZ twin EWAS study designs, under a range of epigenetic risk effect sizes and conditions. For example, to detect a 10% mean methylation difference between affected and unaffected subjects at a genome-wide significance threshold of P = 1 × 10⁻⁶, 98 MZ twin pairs were required to reach 80% EWAS power, and 112 cases and 112 controls were needed in the case-control design. We also estimated the minimum sample size required to reach 80% EWAS power under both study designs. Our analyses highlighted several factors that significantly influenced EWAS power, including sample size, epigenetic risk effect size, the variance of DNA methylation at the locus of interest and the correlation in DNA methylation patterns within the twin sample. Conclusions: We provide power estimates for array-based DNA methylation EWAS under case-control and disease-discordant MZ twin designs, and explore multiple factors that impact on EWAS power. Our results can help guide EWAS experimental design and interpretation for future epigenetic studies. PMID:25972603

  14. Wind tunnel calibration of the USGS dust deposition sampler: Sampling efficiency and grain size correction

    NASA Astrophysics Data System (ADS)

    Goossens, Dirk

    2010-11-01

    Wind tunnel experiments were conducted with the USGS (United States Geological Survey) dust deposition sampler to test its efficiency for dust deposition and its capacity to collect representative samples for grain size analysis. Efficiency for dust deposition was ascertained relative to a water surface, which was considered the best alternative for simulating a perfectly absorbent surface. Capacity to collect representative samples for grain size analysis was ascertained by comparing the grain size distribution of the collected dust to that of the original dust. Three versions were tested: an empty sampler, a sampler filled with glass marbles, and a sampler filled with water. Efficiencies and capacity to collect representative samples were ascertained for five wind velocities (range: 1-5 m s -1) and seven grain size classes (range: 10-80 μm). All samplers showed a rapid drop in collection efficiency with increasing wind speed. Efficiencies are low, in the order of 10% or less for most common wind speeds over the continents. Efficiency also drops as the particles become coarser. Adding glass marbles to the sampler increases its efficiency, protects the settled dust from resuspension, and minimizes outsplash during rainfall. The sediment collected by the sampler is finer than the original dust. The bias in the grain size is more expressed in fine particle fractions than in coarse particle fractions. The performance of the USGS sampler is rather low when compared to other dust deposition samplers, but a procedure is provided that allows calculation of the original grain size distribution and dust deposition quantities.

  15. Sample size allocation for food item radiation monitoring and safety inspection.

    PubMed

    Seto, Mayumi; Uriu, Koichiro

    2015-03-01

    The objective of this study is to identify a procedure for determining sample size allocation for food radiation inspections of more than one food item to minimize the potential risk to consumers of internal radiation exposure. We consider a simplified case of food radiation monitoring and safety inspection in which a risk manager is required to monitor two food items, milk and spinach, in a contaminated area. Three protocols for food radiation monitoring with different sample size allocations were assessed by simulating random sampling and inspections of milk and spinach in a conceptual monitoring site. Distributions of (131)I and radiocesium concentrations were determined in reference to (131)I and radiocesium concentrations detected in Fukushima prefecture, Japan, for March and April 2011. The results of the simulations suggested that a protocol that allocates sample size to milk and spinach based on the estimation of (131)I and radiocesium concentrations using the apparent decay rate constants sequentially calculated from past monitoring data can most effectively minimize the potential risks of internal radiation exposure.

  16. Estimating the Size of Populations at High Risk for HIV Using Respondent-Driven Sampling Data

    PubMed Central

    Handcock, Mark S.; Gile, Krista J.; Mar, Corinne M.

    2015-01-01

    Summary The study of hard-to-reach populations presents significant challenges. Typically, a sampling frame is not available, and population members are difficult to identify or recruit from broader sampling frames. This is especially true of populations at high risk for HIV/AIDS. Respondent-driven sampling (RDS) is often used in such settings with the primary goal of estimating the prevalence of infection. In such populations, the number of people at risk for infection and the number of people infected are of fundamental importance. This article presents a case-study of the estimation of the size of the hard-to-reach population based on data collected through RDS. We study two populations of female sex workers and men-who-have-sex-with-men in El Salvador. The approach is Bayesian and we consider different forms of prior information, including using the UNAIDS population size guidelines for this region. We show that the method is able to quantify the amount of information on population size available in RDS samples. As separate validation, we compare our results to those estimated by extrapolating from a capture–recapture study of El Salvadorian cities. The results of our case-study are largely comparable to those of the capture–recapture study when they differ from the UNAIDS guidelines. Our method is widely applicable to data from RDS studies and we provide a software package to facilitate this. PMID:25585794

  17. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    ERIC Educational Resources Information Center

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
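
    The abstract is truncated in this record; for orientation, the core of the MDES framework for the simplest case (an individual-level randomized trial with balanced assignment) reduces to MDES = M_df * sqrt((1 - R^2) / (P(1 - P)n)), where M_df is the sum of the critical t values for the chosen significance level and target power. The sketch below implements only that textbook special case and is not a reimplementation of the "PowerUp!" tool; all parameter names are ours.

      # Minimal sketch of a minimum detectable effect size (MDES) calculation for a
      # simple individual-level randomized trial, using the textbook formula
      # MDES = M_df * sqrt((1 - R^2) / (P * (1 - P) * n)); this is not the PowerUp!
      # tool itself, and all parameter names are illustrative.
      from scipy import stats

      def mdes_individual_rct(n, p_treat=0.5, r2=0.0, n_covariates=0,
                              alpha=0.05, power=0.80):
          """Approximate two-tailed MDES (in standard deviation units)."""
          df = n - n_covariates - 2
          multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
          return multiplier * ((1 - r2) / (p_treat * (1 - p_treat) * n)) ** 0.5

      # Example: 400 subjects, balanced assignment, covariates explaining 30% of variance.
      print(round(mdes_individual_rct(n=400, r2=0.30, n_covariates=3), 3))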

  18. A Novel Size-Selective Airborne Particle Sampling Instrument (Wras) for Health Risk Evaluation

    NASA Astrophysics Data System (ADS)

    Gnewuch, H.; Muir, R.; Gorbunov, B.; Priest, N. D.; Jackson, P. R.

    Health risks associated with inhalation of airborne particles are known to be influenced by particle size. A reliable, size-resolving sampler, classifying particles in size ranges from 2 nm to 30 μm and suitable for use in the field, would be beneficial in investigating health risks associated with inhalation of airborne particles. A review of current aerosol samplers highlighted a number of limitations. These could be overcome by combining an inertial deposition impactor with a diffusion collector in a single device. The instrument was designed for analysing mass size distributions. Calibration was carried out using a number of recognised techniques. The instrument was tested in the field by collecting size-resolved samples of lead-containing aerosols present at workplaces in factories producing crystal glass. The mass deposited on each substrate proved sufficient to be detected and measured using atomic absorption spectroscopy. Mass size distributions of lead were produced, and the proportion of lead present in the aerosol nanofraction was calculated; it varied from 10% to 70% by weight.

  19. Multiple Approaches to Down Sizing of the Lunar Sample Return Collection

    NASA Technical Reports Server (NTRS)

    Lofgren, Gary E.; Horz, F.

    2010-01-01

    Future Lunar missions are planned for at least 7 days, significantly longer than the 3 days of the later Apollo missions. The last of those missions, A-17, returned 111 kg of samples plus another 20 kg of containers. The current Constellation program requirement for science return weight is 100 kg, with the hope of raising that limit to near 250 kg including containers and other non-geological materials. The estimated return weight for rock and soil samples will, at best, be about 175 kg. One method proposed to accomplish down-sizing of the collection is the use of a Geo-Lab in the lunar habitat to complete a preliminary examination of selected samples and facilitate prioritizing the return samples.

  20. Estimating the effect of recurrent infectious diseases on nutritional status: sampling frequency, sample-size, and bias.

    PubMed

    Schmidt, Wolf-Peter; Genser, Bernd; Luby, Stephen P; Chalabi, Zaid

    2011-08-01

    There is an ongoing interest in studying the effect of common recurrent infections and conditions, such as diarrhoea, respiratory infections, and fever, on the nutritional status of children at risk of malnutrition. Epidemiological studies exploring this association need to measure infections with sufficient accuracy to minimize bias in the effect estimates. A versatile model of common recurrent infections was used for exploring how many repeated measurements of disease are required to maximize the power and logistical efficiency of studies investigating the effect of infectious diseases on malnutrition without compromising the validity of the estimates. Depending on the prevalence and distribution of disease within a population, 15-30 repeat measurements per child over one year should be sufficient to provide unbiased estimates of the association between infections and nutritional status. Less-frequent measurements lead to a bias in the effect size towards zero, especially if disease is rare. In contrast, recall error can lead to exaggerated effect sizes. Recall periods of three days or shorter may be preferable compared to longer recall periods. The results showed that accurate estimation of the association between recurrent infections and nutritional status required closer follow-up of study participants than studies using recurrent infections as an outcome measure. The findings of the study provide guidance for choosing an appropriate sampling strategy to explore this association.

  1. Sample size calculation for the Wilcoxon-Mann-Whitney test adjusting for ties.

    PubMed

    Zhao, Yan D; Rahardja, Dewi; Qu, Yongming

    2008-02-10

    In this paper we study sample size calculation methods for the asymptotic Wilcoxon-Mann-Whitney test for data with or without ties. The existing methods are applicable either to data with ties or to data without ties, but not to both cases. While the existing methods developed for data without ties perform well, the methods developed for data with ties have limitations in that they are either applicable only to proportional odds alternatives or have computational difficulties. We propose a new method which has a closed-form formula and therefore is very easy to calculate. In addition, the new method can be applied to data both with and without ties. Simulations have demonstrated that the new sample size formula performs very well, as the corresponding actual powers are close to the nominal powers. PMID:17487941
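
    The tie-adjusted closed-form formula itself is not given in the abstract. As a familiar point of comparison for the no-ties case, Noether's approximation can be sketched as follows; p denotes the assumed probability P(Y > X), and the allocation fraction is a user choice.

      # Minimal sketch of Noether's approximation for the total sample size of a
      # two-sided Wilcoxon-Mann-Whitney test without ties; this is the classical
      # reference formula, not the tie-adjusted closed form proposed in the paper.
      import math
      from scipy.stats import norm

      def wmw_total_sample_size(p, alpha=0.05, power=0.80, frac_group1=0.5):
          """Total N so the WMW test detects P(Y > X) = p with the given power."""
          z_alpha = norm.ppf(1 - alpha / 2)
          z_beta = norm.ppf(power)
          t = frac_group1
          n_total = (z_alpha + z_beta) ** 2 / (12 * t * (1 - t) * (p - 0.5) ** 2)
          return math.ceil(n_total)

      # Example: detect a shift corresponding to P(Y > X) = 0.65 with 80% power.
      print(wmw_total_sample_size(p=0.65))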

  2. Presentation of the intrasubject coefficient of variation for sample size planning in bioequivalence studies.

    PubMed

    Hauschke, D; Steinijans, W V; Diletti, E; Schall, R; Luus, H G; Elze, M; Blume, H

    1994-07-01

    Bioequivalence studies are generally performed as crossover studies and, therefore, information on the intrasubject coefficient of variation is needed for sample size planning. Unfortunately, this information is usually not presented in publications on bioequivalence studies, and only the pooled inter- and intrasubject coefficient of variation for either test or reference formulation is reported. Thus, the essential information for sample size planning of future studies is not made available to other researchers. In order to overcome such shortcomings, the presentation of results from bioequivalence studies should routinely include the intrasubject coefficient of variation. For the relevant coefficients of variation, theoretical background together with modes of calculation and presentation are given in this communication with particular emphasis on the multiplicative model.
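
    Under the multiplicative (log-scale) model referred to here, the intrasubject coefficient of variation is conventionally recovered from the residual mean square of the crossover ANOVA on log-transformed data. A minimal sketch of that conversion, assuming that convention (function names are ours):

      # Minimal sketch: convert between the residual mean squared error (MSE) of an
      # ANOVA on log-transformed data and the intrasubject CV, the usual convention
      # under the multiplicative model; function names are illustrative.
      import math

      def cv_intra_from_log_mse(mse_log):
          """Intrasubject CV implied by the residual MSE of a log-scale crossover ANOVA."""
          return math.sqrt(math.exp(mse_log) - 1.0)

      def log_mse_from_cv_intra(cv_intra):
          """Inverse transform, useful when a published CV feeds a sample size formula."""
          return math.log(cv_intra ** 2 + 1.0)

      # Example: a residual MSE of 0.04 on the log scale corresponds to roughly 20% CV.
      print(round(cv_intra_from_log_mse(0.04), 3))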

  4. Statistical power calculation and sample size determination for environmental studies with data below detection limits

    NASA Astrophysics Data System (ADS)

    Shao, Quanxi; Wang, You-Gan

    2009-09-01

    Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing the mean values may become statistically inappropriate and even invalid when substantial proportions of the response values are below the detection limits or censored because strong distributional assumptions have to be made on the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need of imputing the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is a part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that by the traditional t-test, illustrating the merit of our method.

  5. On the validity of the Poisson assumption in sampling nanometer-sized aerosols

    SciTech Connect

    Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn

    2014-01-01

    A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes - 10 nm, 25 nm, 50 nm and 100 nm - by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied as the results here do not definitively prove applicability in all sampling situations.

  6. Transition Densities and Sample Frequency Spectra of Diffusion Processes with Selection and Variable Population Size

    PubMed Central

    Živković, Daniel; Steinrücken, Matthias; Song, Yun S.; Stephan, Wolfgang

    2015-01-01

    Advances in empirical population genetics have made apparent the need for models that simultaneously account for selection and demography. To address this need, we here study the Wright–Fisher diffusion under selection and variable effective population size. In the case of genic selection and piecewise-constant effective population sizes, we obtain the transition density by extending a recently developed method for computing an accurate spectral representation for a constant population size. Utilizing this extension, we show how to compute the sample frequency spectrum in the presence of genic selection and an arbitrary number of instantaneous changes in the effective population size. We also develop an alternate, efficient algorithm for computing the sample frequency spectrum using a moment-based approach. We apply these methods to answer the following questions: If neutrality is incorrectly assumed when there is selection, what effects does it have on demographic parameter estimation? Can the impact of negative selection be observed in populations that undergo strong exponential growth? PMID:25873633

  7. Response characteristics of laser diffraction particle size analyzers - Optical sample volume extent and lens effects

    NASA Technical Reports Server (NTRS)

    Hirleman, E. D.; Oechsle, V.; Chigier, N. A.

    1984-01-01

    The response characteristics of laser diffraction particle sizing instruments were studied theoretically and experimentally. In particular, the extent of the optical sample volume and the effects of receiving lens properties were investigated in detail. The experimental work was performed with a particle size analyzer using a calibration reticle containing a two-dimensional array of opaque circular disks on a glass substrate. The calibration slide simulated the forward-scattering characteristics of a Rosin-Rammler droplet size distribution. The reticle was analyzed with collection lenses of 63 mm, 100 mm, and 300 mm focal lengths using scattering inversion software that determined best-fit Rosin-Rammler size distribution parameters. The data differed from the predicted response for the reticle by about 10 percent. A set of calibration factors for the detector elements was determined that corrected for the nonideal response of the instrument. The response of the instrument was also measured as a function of reticle position, and the results confirmed a theoretical optical sample volume model presented here.

  8. A contemporary decennial global Landsat sample of changing agricultural field sizes

    NASA Astrophysics Data System (ADS)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by

  9. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26914402
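
    The closed-form expressions are implemented in the authors' rPowerSampleSize package, which is not reproduced here. Purely for orientation, r-power can also be approximated by the Monte Carlo strategy the abstract mentions as a comparison; the sketch below assumes independent normal test statistics, all m null hypotheses false, and a single-step Bonferroni procedure.

      # Monte Carlo sketch of r-power (probability of rejecting at least r false null
      # hypotheses), assuming independent normal test statistics, all m hypotheses
      # false, and a single-step Bonferroni procedure; not the paper's closed form.
      import numpy as np
      from scipy.stats import norm

      def r_power_bonferroni(effect_sizes, n_per_arm, r, alpha=0.05, n_sim=20000, seed=1):
          rng = np.random.default_rng(seed)
          effects = np.asarray(effect_sizes, dtype=float)
          m = effects.size
          ncp = effects * np.sqrt(n_per_arm / 2.0)       # noncentrality of each z-statistic
          crit = norm.ppf(1 - alpha / (2 * m))           # Bonferroni critical value
          z = rng.standard_normal((n_sim, m)) + ncp      # simulated test statistics
          rejections = (np.abs(z) > crit).sum(axis=1)
          return (rejections >= r).mean()

      # Example: three endpoints with standardized effects 0.40, 0.30 and 0.25,
      # 150 patients per arm, and the goal of rejecting at least two endpoints.
      print(r_power_bonferroni([0.40, 0.30, 0.25], n_per_arm=150, r=2))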

  10. Sample Size Requirements for Estimation of Item Parameters in the Multidimensional Graded Response Model.

    PubMed

    Jiang, Shengyu; Wang, Chun; Weiss, David J

    2016-01-01

    Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root-mean-square-error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items when 1000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1000 did not increase the accuracy of MGRM parameter estimates. PMID:26903916

  11. Quantification of probe-sample interactions of a scanning thermal microscope using a nanofabricated calibration sample having programmable size.

    PubMed

    Ge, Yunfei; Zhang, Yuan; Booth, Jamie A; Weaver, Jonathan M R; Dobson, Phillip S

    2016-08-12

    We report a method for quantifying scanning thermal microscopy (SThM) probe-sample thermal interactions in air using a novel temperature calibration device. This new device has been designed, fabricated and characterised using SThM to provide an accurate and spatially variable temperature distribution that can be used as a temperature reference due to its unique design. The device was characterised by means of a microfabricated SThM probe operating in passive mode. This data was interpreted using a heat transfer model, built to describe the thermal interactions during a SThM thermal scan. This permitted the thermal contact resistance between the SThM tip and the device to be determined as 8.33 × 10(5) K W(-1). It also permitted the probe-sample contact radius to be clarified as being the same size as the probe's tip radius of curvature. Finally, the data were used in the construction of a lumped-system steady state model for the SThM probe and its potential applications were addressed. PMID:27363896

  12. Quantification of probe-sample interactions of a scanning thermal microscope using a nanofabricated calibration sample having programmable size

    NASA Astrophysics Data System (ADS)

    Ge, Yunfei; Zhang, Yuan; Booth, Jamie A.; Weaver, Jonathan M. R.; Dobson, Phillip S.

    2016-08-01

    We report a method for quantifying scanning thermal microscopy (SThM) probe-sample thermal interactions in air using a novel temperature calibration device. This new device has been designed, fabricated and characterised using SThM to provide an accurate and spatially variable temperature distribution that can be used as a temperature reference due to its unique design. The device was characterised by means of a microfabricated SThM probe operating in passive mode. This data was interpreted using a heat transfer model, built to describe the thermal interactions during a SThM thermal scan. This permitted the thermal contact resistance between the SThM tip and the device to be determined as 8.33 × 10^5 K W^-1. It also permitted the probe-sample contact radius to be clarified as being the same size as the probe’s tip radius of curvature. Finally, the data were used in the construction of a lumped-system steady state model for the SThM probe and its potential applications were addressed.

  14. Sub-sampling genetic data to estimate black bear population size: A case study

    USGS Publications Warehouse

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.

  15. Percolation segregation in multi-size and multi-component particulate mixtures: Measurement, sampling, and modeling

    NASA Astrophysics Data System (ADS)

    Jha, Anjani K.

    Particulate materials are routinely handled in large quantities by industries such as agriculture, electronics, ceramics, chemicals, cosmetics, fertilizer, food, nutraceuticals, pharmaceuticals, power, and powder metallurgy. These industries encounter segregation due to differences in the physical and mechanical properties of particulates. The general goal of this research was to study percolation segregation in multi-size and multi-component particulate mixtures, especially its measurement, sampling, and modeling. A second-generation primary segregation shear cell (PSSC-II), an industrial vibrator, a true cubical triaxial tester, and two samplers (triers) were used as the primary test apparatuses for quantifying segregation and flowability, and for developing strategies to mitigate segregation in particulates. Toward this end, percolation segregation was studied in binary, ternary, and quaternary size mixtures of two particulate types: urea (spherical) and potash (angular). Three coarse size ranges, 3,350-4,000 μm (mean size = 3,675 μm), 2,800-3,350 μm (3,075 μm), and 2,360-2,800 μm (2,580 μm), and three fines size ranges, 2,000-2,360 μm (2,180 μm), 1,700-2,000 μm (1,850 μm), and 1,400-1,700 μm (1,550 μm), were selected for tests of the angular-shaped and spherical-shaped materials. The 1,550 μm fines size of urea was not available in sufficient quantity and was therefore not included in the tests. Percolation segregation in fertilizer bags was also tested at two vibration frequencies, 5 Hz and 7 Hz. The segregation and flowability of binary mixtures of urea at three equilibrium relative humidities (40%, 50%, and 60%) were also tested. Furthermore, solid fertilizer sampling was performed to compare samples obtained from triers with opening widths of 12.7 mm and 19.1 mm and to determine size segregation in blend fertilizers. Based on experimental results, the normalized segregation rate (NSR) of binary mixtures was dependent on size ratio, mixing ratio

  16. Sample size determination for a t test given a t value from a previous study: A FORTRAN 77 program.

    PubMed

    Gillett, R

    2001-11-01

    When uncertain about the magnitude of an effect, researchers commonly substitute in the standard sample-size-determination formula an estimate of effect size derived from a previous experiment. A problem with this approach is that the traditional sample-size-determination formula was not designed to deal with the uncertainty inherent in an effect-size estimate. Consequently, estimate-substitution in the traditional sample-size-determination formula can lead to a substantial loss of power. A method of sample-size determination designed to handle uncertainty in effect-size estimates is described. The procedure uses the t value and sample size from a previous study, which might be a pilot study or a related study in the same area, to establish a distribution of probable effect sizes. The sample size to be employed in the new study is that which supplies an expected power of the desired amount over the distribution of probable effect sizes. A FORTRAN 77 program is presented that permits swift calculation of sample size for a variety of t tests, including independent t tests, related t tests, t tests of correlation coefficients, and t tests of multiple regression b coefficients.
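
    The FORTRAN 77 program itself is not reproduced here. A minimal Python sketch of the underlying idea, assuming the pilot effect size is recovered as d = t*sqrt(2/n_pilot) and its uncertainty is propagated by averaging the power of a two-sample t test over plausible effect sizes (the normal approximation to that uncertainty is our simplification):

      # Minimal sketch of "expected power" sample-size determination for an
      # independent-groups t test: recover the pilot effect size from its t value,
      # draw plausible effect sizes around it, and pick the smallest n whose power,
      # averaged over those draws, reaches the target. This only illustrates the
      # idea; it is not Gillett's FORTRAN 77 program, and the normal approximation
      # to the effect-size uncertainty is our simplification.
      import numpy as np
      from scipy.stats import nct, t as t_dist

      def expected_power(n_per_group, d_draws, alpha=0.05):
          df = 2 * n_per_group - 2
          crit = t_dist.ppf(1 - alpha / 2, df)
          ncp = d_draws * np.sqrt(n_per_group / 2.0)
          power = nct.sf(crit, df, ncp) + nct.cdf(-crit, df, ncp)   # two-sided power
          return power.mean()

      def sample_size_expected_power(t_pilot, n_pilot_per_group, target=0.80,
                                     alpha=0.05, n_draws=5000, seed=7):
          rng = np.random.default_rng(seed)
          d_obs = t_pilot * np.sqrt(2.0 / n_pilot_per_group)    # pilot effect size
          se_d = np.sqrt(2.0 / n_pilot_per_group)               # rough standard error of d
          d_draws = rng.normal(d_obs, se_d, n_draws)
          for n in range(2, 2001):
              if expected_power(n, d_draws, alpha) >= target:
                  return n
          return None   # target not reachable within the search range

      # Example: a pilot study with 15 subjects per group and t = 2.1.
      print(sample_size_expected_power(t_pilot=2.1, n_pilot_per_group=15))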

  17. Clinical and MRI activity as determinants of sample size for pediatric multiple sclerosis trials

    PubMed Central

    Verhey, Leonard H.; Signori, Alessio; Arnold, Douglas L.; Bar-Or, Amit; Sadovnick, A. Dessa; Marrie, Ruth Ann; Banwell, Brenda

    2013-01-01

    Objective: To estimate sample sizes for pediatric multiple sclerosis (MS) trials using new T2 lesion count, annualized relapse rate (ARR), and time to first relapse (TTFR) endpoints. Methods: Poisson and negative binomial models were fit to new T2 lesion and relapse count data, and negative binomial time-to-event and exponential models were fit to TTFR data of 42 children with MS enrolled in a national prospective cohort study. Simulations were performed by resampling from the best-fitting model of new T2 lesion count, number of relapses, or TTFR, under various assumptions of the effect size, trial duration, and model parameters. Results: Assuming a 50% reduction in new T2 lesions over 6 months, 90 patients/arm are required, whereas 165 patients/arm are required for a 40% treatment effect. Sample sizes for 2-year trials using relapse-related endpoints are lower than that for 1-year trials. For 2-year trials and a conservative assumption of overdispersion (ϑ), sample sizes range from 70 patients/arm (using ARR) to 105 patients/arm (TTFR) for a 50% reduction in relapses, and 230 patients/arm (ARR) to 365 patients/arm (TTFR) for a 30% relapse reduction. Assuming a less conservative ϑ, 2-year trials using ARR require 45 patients/arm (60 patients/arm for TTFR) for a 50% reduction in relapses and 145 patients/arm (200 patients/arm for TTFR) for a 30% reduction. Conclusion: Six-month phase II trials using new T2 lesion count as an endpoint are feasible in the pediatric MS population; however, trials powered on ARR or TTFR will need to be 2 years in duration and will require multicentered collaboration. PMID:23966255

  18. Forest inventory using multistage sampling with probability proportional to size. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.

    1984-01-01

    A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in southeastern Brazil. The LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified utilizing a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third stage (ground level) the timber volume of each class is determined. The total timber volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate timber volume estimate with a smaller number of aerial photographs and reduced time in field work.

  19. Subspace Leakage Analysis and Improved DOA Estimation With Small Sample Size

    NASA Astrophysics Data System (ADS)

    Shaghaghi, Mahdi; Vorobyov, Sergiy A.

    2015-06-01

    Classical methods of DOA estimation such as the MUSIC algorithm are based on estimating the signal and noise subspaces from the sample covariance matrix. For a small number of samples, such methods are exposed to performance breakdown, as the sample covariance matrix can largely deviate from the true covariance matrix. In this paper, the problem of DOA estimation performance breakdown is investigated. We consider the structure of the sample covariance matrix and the dynamics of the root-MUSIC algorithm. The performance breakdown in the threshold region is associated with subspace leakage, where some portion of the true signal subspace resides in the estimated noise subspace. In this paper, the subspace leakage is theoretically derived. We also propose a two-step method which improves the performance by modifying the sample covariance matrix such that the amount of the subspace leakage is reduced. Furthermore, we introduce a phenomenon named root-swap, which occurs in the root-MUSIC algorithm in the low-sample-size region and degrades the performance of the DOA estimation. A new method is then proposed to alleviate this problem. Numerical examples and simulation results are given for uncorrelated and correlated sources to illustrate the improvement achieved by the proposed methods. Moreover, the proposed algorithms are combined with the pseudo-noise resampling method to further improve the performance.

  20. Particle size conditions water repellency in sand samples hydrophobized with stearic acid

    NASA Astrophysics Data System (ADS)

    González-Peñaloza, F. A.; Jordán, A.; Bellinfante, N.; Bárcenas-Moreno, G.; Mataix-Solera, J.; Granged, A. J. P.; Gil, J.; Zavala, L. M.

    2012-04-01

    The main objective of this research is to study the effects of particle size and soil moisture on water repellency (WR) in hydrophobized sand samples. Quartz sand samples were collected from the top 15 cm of sandy soils, homogenised and divided into different sieve fractions: 0.5-2 mm (coarse sand), 0.25-0.5 mm (medium sand), and 0.05-0.25 mm (fine sand). WR was artificially induced in sand samples using different concentrations of stearic acid (SA; 0.5, 1, 5, 10, 20 and 30 g kg-1). Sand samples were placed in Petri plates and moistened with distilled water to 10% water content by weight. After a period of 30 min, soil WR was determined using the water drop penetration time (WDPT) test. A set of sub-samples was placed in an oven (50 °C) during the experimental period, and the rest was left to air-dry at standard laboratory conditions. Water repellent soil samples were used as controls, and the same treatments were applied. WR was determined every 24 h. No changes in WR were observed after 6 days of treatment. As expected, air-dried fine sand samples showed WR increasing with SA concentration and decreasing with soil moisture. In contrast, oven-dried samples remained wettable at SA concentrations below 5 g kg-1. Fine sand oven-dried samples showed extreme WR after just one day of treatment, but air-dried samples did not show extreme repellency until three days after treatment. SA concentrations above 5 g kg-1 always induced extreme WR. Medium sand air-dried samples showed hydrophilic properties when moist and at low SA concentrations (≤1 g kg-1), but strong to extreme WR was induced by higher SA concentrations. In the case of oven-dried samples, medium sand showed severe to extreme WR regardless of soil moisture. Coarse sand showed the longest WDPTs, independently of soil moisture content or SA concentration. This behaviour may be caused by super-hydrophobicity. Also, it is suggested that movements of sand particles during wetting contribute to expose new

  1. Statistical characterization of a large geochemical database and effect of sample size

    USGS Publications Warehouse

    Zhang, C.; Manheim, F. T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.

  2. Approaches to Sample Size Determination for Multivariate Data: Applications to PCA and PLS-DA of Omics Data.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2016-08-01

    Sample size determination is a fundamental step in the design of experiments. Methods for sample size determination are abundant for univariate analysis methods, but scarce in the multivariate case. Omics data are multivariate in nature and are commonly investigated using multivariate statistical methods, such as principal component analysis (PCA) and partial least-squares discriminant analysis (PLS-DA). No simple approaches to sample size determination exist for PCA and PLS-DA. In this paper we will introduce important concepts and offer strategies for (minimally) required sample size estimation when planning experiments to be analyzed using PCA and/or PLS-DA. PMID:27322847

  4. Adjustable virtual pore-size filter for automated sample preparation using acoustic radiation force

    SciTech Connect

    Jung, B; Fisher, K; Ness, K; Rose, K; Mariella, R

    2008-05-22

    We present a rapid and robust size-based separation method for high throughput microfluidic devices using acoustic radiation force. We developed a finite element modeling tool to predict the two-dimensional acoustic radiation force field perpendicular to the flow direction in microfluidic devices. Here we compare the results from this model with experimental parametric studies including variations of the PZT driving frequencies and voltages as well as various particle sizes, compressibilities, and densities. These experimental parametric studies also provide insight into the development of an adjustable 'virtual' pore-size filter as well as optimal operating conditions for various microparticle sizes. We demonstrated the separation of Saccharomyces cerevisiae and MS2 bacteriophage using acoustic focusing. The acoustic radiation force did not affect the MS2 viruses, and their concentration profile remained unchanged. With optimized design of our microfluidic flow system we were able to achieve yields of > 90% for the MS2 with > 80% of the S. cerevisiae being removed in this continuous-flow sample preparation device.

  5. Enhanced Z-LDA for Small Sample Size Training in Brain-Computer Interface Systems

    PubMed Central

    Gao, Dongrui; Zhang, Rui; Liu, Tiejun; Li, Fali; Ma, Teng; Lv, Xulin; Li, Peiyang; Yao, Dezhong; Xu, Peng

    2015-01-01

    Background. The training set of an online brain-computer interface (BCI) experiment is usually small. A small training set lacks enough information to train the classifier thoroughly, resulting in poor classification performance during online testing. Methods. In this paper, building on Z-LDA, we calculate the classification probability of Z-LDA and then use it to select reliable samples from the testing set to enlarge the training set, aiming to mine additional information from the testing set to adjust the biased classification boundary obtained from the small training set. The proposed approach is an extension of the previous Z-LDA and is named enhanced Z-LDA (EZ-LDA). Results. We evaluated the classification performance of LDA, Z-LDA, and EZ-LDA on simulated and real BCI datasets with different sizes of training samples, and the classification results showed that EZ-LDA achieved the best classification performance. Conclusions. EZ-LDA is promising for dealing with the small-sample-size training problem common in online BCI systems. PMID:26550023
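
    The abstract does not spell out the EZ-LDA update rule, so the sketch below only illustrates the general self-training idea it describes: classify the test set, add the most confidently classified samples (with their predicted labels) back into the training set, and retrain. Standard scikit-learn LDA is used in place of Z-LDA, and the confidence threshold is an arbitrary choice.

      # Minimal sketch of the self-training idea described above: train on the small
      # labelled set, add the most confidently classified test samples (with their
      # predicted labels), and retrain. Plain scikit-learn LDA stands in for
      # Z-LDA/EZ-LDA, and the 0.90 confidence threshold is an arbitrary choice.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def self_training_lda(X_train, y_train, X_test, confidence=0.90, n_rounds=3):
          clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
          for _ in range(n_rounds):
              proba = clf.predict_proba(X_test)
              reliable = proba.max(axis=1) >= confidence   # high-confidence test samples
              if not reliable.any():
                  break
              pseudo_labels = clf.classes_[proba[reliable].argmax(axis=1)]
              X_aug = np.vstack([X_train, X_test[reliable]])
              y_aug = np.concatenate([y_train, pseudo_labels])
              clf = LinearDiscriminantAnalysis().fit(X_aug, y_aug)
          return clf

      # Example with synthetic two-class data standing in for BCI features.
      rng = np.random.default_rng(0)
      X_tr = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(1, 1, (10, 4))])
      y_tr = np.array([0] * 10 + [1] * 10)
      X_te = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(1, 1, (50, 4))])
      print(self_training_lda(X_tr, y_tr, X_te).predict(X_te[:5]))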

  6. Sample size slippages in randomised trials: exclusions and the lost and wayward.

    PubMed

    Schulz, Kenneth F; Grimes, David A

    2002-03-01

    Proper randomisation means little if investigators cannot include all randomised participants in the primary analysis. Participants might ignore follow-up, leave town, or take aspartame when instructed to take aspirin. Exclusions before randomisation do not bias the treatment comparison, but they can hurt generalisability. Eligibility criteria for a trial should be clear, specific, and applied before randomisation. Readers should assess whether any of the criteria make the trial sample atypical or unrepresentative of the people in which they are interested. In principle, assessment of exclusions after randomisation is simple: none are allowed. For the primary analysis, all participants enrolled should be included and analysed as part of the original group assigned (an intent-to-treat analysis). In reality, however, losses frequently occur. Investigators should, therefore, commit adequate resources to develop and implement procedures to maximise retention of participants. Moreover, researchers should provide clear, explicit information on the progress of all randomised participants through the trial by use of, for instance, a trial profile. Investigators can also do secondary analyses on, for instance, per-protocol or as-treated participants. Such analyses should be described as secondary and non-randomised comparisons. Mishandling of exclusions causes serious methodological difficulties. Unfortunately, some explanations for mishandling exclusions intuitively appeal to readers, disguising the seriousness of the issues. Creative mismanagement of exclusions can undermine trial validity.

  7. An In Situ Method for Sizing Insoluble Residues in Precipitation and Other Aqueous Samples

    PubMed Central

    Axson, Jessica L.; Creamean, Jessie M.; Bondy, Amy L.; Capracotta, Sonja S.; Warner, Katy Y.; Ault, Andrew P.

    2015-01-01

    Particles are frequently incorporated into clouds or precipitation, influencing climate by acting as cloud condensation or ice nuclei, taking up coatings during cloud processing, and removing species through wet deposition. Many of these particles, particularly ice nuclei, can remain suspended within cloud droplets/crystals as insoluble residues. While previous studies have measured the soluble or bulk mass of species within clouds and precipitation, no studies to date have determined the number concentration and size distribution of insoluble residues in precipitation or cloud water using in situ methods. Herein, for the first time we demonstrate that Nanoparticle Tracking Analysis (NTA) is a powerful in situ method for determining the total number concentration, number size distribution, and surface area distribution of insoluble residues in precipitation, both of rain and melted snow. The method uses 500 μL or less of liquid sample and does not require sample modification. Number concentrations for the insoluble residues in aqueous precipitation samples ranged from 2.0–3.0(±0.3)×10^8 particles cm^-3, while surface area ranged from 1.8(±0.7)–3.2(±1.0)×10^7 μm^2 cm^-3. Number size distributions peaked between 133–150 nm, with both single and multi-modal character, while surface area distributions peaked between 173–270 nm. Comparison with electron microscopy of particles up to 10 μm shows that, by number, >97% of residues are <1 μm in diameter, the upper limit of the NTA. The range of concentration and distribution properties indicates that insoluble residue properties vary with ambient aerosol concentrations, cloud microphysics, and meteorological dynamics. NTA has great potential for studying the role that insoluble residues play in critical atmospheric processes. PMID:25705069

  8. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA

    PubMed Central

    Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe

    2015-01-01

    Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω^2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu PMID:25819674
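
    The micropower package implements the full framework described above and is not reproduced here. As a rough, generic illustration of simulation-based PERMANOVA power estimation, the sketch below simulates Euclidean data with a group-level shift, computes the pseudo-F statistic from the distance matrix, and estimates power as the fraction of simulated data sets with a significant permutation p-value; all settings are made up.

      # Rough illustration of simulation-based PERMANOVA power estimation: simulate
      # Euclidean data with a group-level shift, compute the pseudo-F statistic from
      # the distance matrix, and estimate power as the fraction of simulations with a
      # significant permutation p-value. This is a generic sketch, not the micropower
      # package's API, and all settings below are made up.
      import numpy as np

      def pseudo_f(dist, groups):
          """PERMANOVA pseudo-F from a square distance matrix and group labels."""
          n = len(groups)
          labels = np.unique(groups)
          d2 = dist ** 2
          ss_total = d2[np.triu_indices(n, k=1)].sum() / n
          ss_within = 0.0
          for g in labels:
              idx = np.where(groups == g)[0]
              sub = d2[np.ix_(idx, idx)]
              ss_within += sub[np.triu_indices(len(idx), k=1)].sum() / len(idx)
          ss_between = ss_total - ss_within
          return (ss_between / (len(labels) - 1)) / (ss_within / (n - len(labels)))

      def permanova_pvalue(dist, groups, rng, n_perm=199):
          f_obs = pseudo_f(dist, groups)
          hits = sum(pseudo_f(dist, rng.permutation(groups)) >= f_obs for _ in range(n_perm))
          return (hits + 1) / (n_perm + 1)

      def estimate_power(n_per_group, shift, n_features=20, n_sim=100, alpha=0.05, seed=3):
          rng = np.random.default_rng(seed)
          groups = np.array([0] * n_per_group + [1] * n_per_group)
          significant = 0
          for _ in range(n_sim):
              x = rng.standard_normal((2 * n_per_group, n_features))
              x[groups == 1] += shift                    # group-level effect
              dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
              significant += permanova_pvalue(dist, groups, rng) < alpha
          return significant / n_sim

      # Example: 15 samples per group and a modest per-feature shift of 0.3 SD.
      print(estimate_power(n_per_group=15, shift=0.3))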

  9. Unbiased Comparison of Sample Size Estimates From Longitudinal Structural Measures in ADNI

    PubMed Central

    Holland, Dominic; McEvoy, Linda K.; Dale, Anders M.

    2013-01-01

    Structural changes in neuroanatomical subregions can be measured using serial magnetic resonance imaging scans, and provide powerful biomarkers for detecting and monitoring Alzheimer's disease. The Alzheimer's Disease Neuroimaging Initiative (ADNI) has made a large database of longitudinal scans available, with one of its primary goals being to explore the utility of structural change measures for assessing treatment effects in clinical trials of putative disease-modifying therapies. Several ADNI-funded research laboratories have calculated such measures from the ADNI database and made their results publicly available. Here, using sample size estimates, we present a comparative analysis of the overall results that come from the application of each laboratory's extensive processing stream to the ADNI database. Obtaining accurate measures of change requires correcting for potential bias due to the measurement methods themselves; and obtaining realistic sample size estimates for treatment response, based on longitudinal imaging measures from natural history studies such as ADNI, requires calibrating measured change in patient cohorts with respect to longitudinal anatomical changes inherent to normal aging. We present results showing that significant longitudinal change is present in healthy control subjects who test negative for amyloid-β pathology. Therefore, sample size estimates as commonly reported from power calculations based on total structural change in patients, rather than change in patients relative to change in healthy controls, are likely to be unrealistically low for treatments targeting amyloid-related pathology. Of all the measures publicly available in ADNI, thinning of the entorhinal cortex quantified with the Quarc methodology provides the most powerful change biomarker. PMID:21830259

  10. Evaluation of the effects of anatomic location, histologic processing, and sample size on shrinkage of skin samples obtained from canine cadavers.

    PubMed

    Reagan, Jennifer K; Selmic, Laura E; Garrett, Laura D; Singh, Kuldeep

    2016-09-01

    OBJECTIVE To evaluate effects of anatomic location, histologic processing, and sample size on shrinkage of excised canine skin samples. SAMPLE Skin samples from 15 canine cadavers. PROCEDURES Elliptical samples of the skin, underlying subcutaneous fat, and muscle fascia were collected from the head, hind limb, and lumbar region of each cadaver. Two samples (10 mm and 30 mm) were collected at each anatomic location of each cadaver (one from the left side and the other from the right side). Measurements of length, width, depth, and surface area were collected prior to excision (P1) and after fixation in neutral-buffered 10% formalin for 24 to 48 hours (P2). Length and width were also measured after histologic processing (P3). RESULTS Length and width decreased significantly at all anatomic locations and for both sample sizes at each processing stage. Hind limb samples had the greatest decrease in length, compared with results for samples obtained from other locations, across all processing stages for both sample sizes. The 30-mm samples had a greater percentage change in length and width between P1 and P2 than did the 10-mm samples. Histologic processing (P2 to P3) had a greater effect on the percentage shrinkage of 10-mm samples. For all locations and both sample sizes, percentage change between P1 and P3 ranged from 24.0% to 37.7% for length and 18.0% to 22.8% for width. CONCLUSIONS AND CLINICAL RELEVANCE Histologic processing, anatomic location, and sample size affected the degree of shrinkage of a canine skin sample from excision to histologic assessment. PMID:27580116

  11. Particle size distribution of workplace aerosols in manganese alloy smelters applying a personal sampling strategy.

    PubMed

    Berlinger, B; Bugge, M D; Ulvestad, B; Kjuus, H; Kandler, K; Ellingsen, D G

    2015-12-01

    Air samples were collected by personal sampling with five-stage Sioutas cascade impactors and respirable cyclones in parallel among tappers and crane operators in two manganese (Mn) alloy smelters in Norway to investigate PM fractions. The mass concentrations of PM collected by using the impactors and the respirable cyclones were critically evaluated by comparing the results of the parallel measurements. The geometric mean (GM) mass concentrations of the respirable fraction and the <10 μm PM fraction were 0.18 and 0.39 mg m(-3), respectively. Particle size distributions were determined using the impactor data in the range from 0 to 10 μm and by stationary measurements by using a scanning mobility particle sizer in the range from 10 to 487 nm. On average 50% of the particulate mass in the Mn alloy smelters was in the range from 2.5 to 10 μm, while the rest was distributed between the lower stages of the impactors. On average 15% of the particulate mass was found in the <0.25 μm PM fraction. The comparisons of the different PM fraction mass concentrations related to different work tasks or different workplaces showed in many cases statistically significant differences; however, the particle size distribution of PM in the fraction <10 μm d(ae) was independent of the plant, furnace, or work task. PMID:26498986

  12. Sample size considerations of prediction-validation methods in high-dimensional data for survival outcomes.

    PubMed

    Pang, Herbert; Jung, Sin-Ho

    2013-04-01

    A variety of prediction methods are used to relate high-dimensional genome data with a clinical outcome using a prediction model. Once a prediction model is developed from a data set, it should be validated using a resampling method or an independent data set. Although the existing prediction methods have been intensively evaluated by many investigators, there has not been a comprehensive study investigating the performance of the validation methods, especially with a survival clinical outcome. Understanding the properties of the various validation methods can allow researchers to perform more powerful validations while controlling for type I error. In addition, a sample size calculation strategy based on these validation methods is lacking. We conduct extensive simulations to examine the statistical properties of these validation strategies. In both simulations and a real data example, we have found that 10-fold cross-validation with permutation gave the best power while controlling type I error close to the nominal level. Based on this, we have also developed a sample size calculation method that will be used to design a validation study with a user-chosen combination of prediction and validation methods. Microarray and genome-wide association studies data are used as illustrations. The power calculation method in this presentation can be used for the design of any biomedical study involving high-dimensional data and survival outcomes.

  13. Ethics and animal numbers: informal analyses, uncertain sample sizes, inefficient replications, and type I errors.

    PubMed

    Fitts, Douglas A

    2011-07-01

    To obtain approval for the use of vertebrate animals in research, an investigator must assure an ethics committee that the proposed number of animals is the minimum necessary to achieve a scientific goal. How does an investigator make that assurance? A power analysis is most accurate when the outcome is known before the study, which it rarely is. A 'pilot study' is appropriate only when the number of animals used is a tiny fraction of the numbers that will be invested in the main study because the data for the pilot animals cannot legitimately be used again in the main study without increasing the rate of type I errors (false discovery). Traditional significance testing requires the investigator to determine the final sample size before any data are collected and then to delay analysis of any of the data until all of the data are final. An investigator often learns at that point either that the sample size was larger than necessary or too small to achieve significance. Subjects cannot be added at this point in the study without increasing type I errors. In addition, journal reviewers may require more replications in quantitative studies than are truly necessary. Sequential stopping rules used with traditional significance tests allow incremental accumulation of data on a biomedical research problem so that significance, replicability, and use of a minimal number of animals can be assured without increasing type I errors.

  14. Robust reverse engineering of dynamic gene networks under sample size heterogeneity.

    PubMed

    Parikh, Ankur P; Wu, Wei; Xing, Eric P

    2014-01-01

    Simultaneously reverse engineering a collection of condition-specific gene networks from gene expression microarray data to uncover dynamic mechanisms is a key challenge in systems biology. However, existing methods for this task are very sensitive to variations in the size of the microarray samples across different biological conditions (which we term sample size heterogeneity in network reconstruction), and can potentially produce misleading results that can lead to incorrect biological interpretation. In this work, we develop a more robust framework that addresses this novel problem. Just like microarray measurements across conditions must undergo proper normalization on their magnitudes before entering subsequent analysis, we argue that networks across conditions also need to be "normalized" on their density when they are constructed, and we provide an algorithm that allows such normalization to be facilitated while estimating the networks. We show the quantitative advantages of our approach on synthetic and real data. Our analysis of a hematopoietic stem cell dataset reveals interesting results, some of which are confirmed by previously validated results.

  15. Improving IRT Parameter Estimates with Small Sample Sizes: Evaluating the Efficacy of a New Data Augmentation Technique

    ERIC Educational Resources Information Center

    Foley, Brett Patrick

    2010-01-01

    The 3PL model is a flexible and widely used tool in assessment. However, it suffers from limitations due to its need for large sample sizes. This study introduces and evaluates the efficacy of a new sample size augmentation technique called Duplicate, Erase, and Replace (DupER) Augmentation through a simulation study. Data are augmented using…

  16. What can we learn from studies based on small sample sizes? Comment on Regan, Lakhanpal, and Anguiano (2012).

    PubMed

    Johnson, David R; Bachan, Lauren K

    2013-08-01

    In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research.

  17. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  18. Review of Sample Size for Structural Equation Models in Second Language Testing and Learning Research: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    In'nami, Yo; Koizumi, Rie

    2013-01-01

    The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…

  19. Estimation of Effect Size Under Nonrandom Sampling: The Effects of Censoring Studies Yielding Statistically Insignificant Mean Differences.

    ERIC Educational Resources Information Center

    Hedges, Larry V.

    1984-01-01

    If the quantitative result of a study is observed only when the mean difference is statistically significant, the observed mean difference, variance, and effect size are biased estimators of corresponding population parameters. The exact distribution of sample effect size and the maximum likelihood estimator of effect size are derived. (Author/BW)

  20. Many-to-one comparison after sample size reestimation for trials with multiple treatment arms and treatment selection.

    PubMed

    Wang, Jixian

    2010-09-01

    Sample size reestimation (SSRE) provides a useful tool to change the sample size when an interim look reveals that the original sample size is inadequate. To control the overall type I error when testing one hypothesis, several approaches have been proposed to construct a statistic whose distribution is independent of the SSRE under the null hypothesis. We considered a similar approach for comparisons between multiple treatment arms and placebo, allowing the sample sizes in all arms to change depending on interim information. A construction of statistics similar to that for a single hypothesis test is proposed. When the changes of sample sizes in different arms are proportional, we show that one-step and stepwise Dunnett tests can be used directly on statistics constructed in the proposed way. The approach can also be applied to clinical trials with SSRE and treatment selection at interim. The proposed approach is evaluated with simulations under different situations.
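    The sketch below illustrates the general flavour of such constructions, assuming pre-specified stage weights for an inverse-normal combination and a Monte Carlo one-step Dunnett critical value for equally allocated arms; it is an illustration of the approach, not the paper's exact statistic.

```python
# Sketch of a weight-fixed inverse-normal combination for several treatment arms
# versus placebo, with a Monte Carlo one-step Dunnett critical value.
# Illustrative only: weights, allocation and the critical-value computation are
# assumptions and not the exact construction of the cited paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
k = 3                                  # number of treatment arms
w1, w2 = np.sqrt(0.5), np.sqrt(0.5)    # stage weights fixed before any data are seen

def combination_z(z_stage1, z_stage2):
    """Combine stage-wise z statistics with pre-fixed weights, so the null
    distribution does not depend on how stage-2 sample sizes were chosen."""
    return w1 * np.asarray(z_stage1) + w2 * np.asarray(z_stage2)

def dunnett_critical_value(k, alpha=0.025, n_mc=200000):
    """Monte Carlo critical value for the max of k equicorrelated (rho = 1/2)
    standard normals, which arises for many-to-one comparisons with equal allocation."""
    shared = rng.normal(size=n_mc)
    arm = rng.normal(size=(n_mc, k))
    z = np.sqrt(0.5) * shared[:, None] + np.sqrt(0.5) * arm
    return np.quantile(z.max(axis=1), 1 - alpha)

# Example stage-wise z statistics for 3 arms (each already compared to placebo)
z1 = np.array([1.1, 2.0, 0.4])
z2 = np.array([1.8, 2.4, 0.9])
z_comb = combination_z(z1, z2)
crit = dunnett_critical_value(k)
print("combined z:", np.round(z_comb, 2), " Dunnett critical value:", round(crit, 2))
print("rejected arms:", np.where(z_comb > crit)[0] + 1)
```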

  1. Alpha spectrometric characterization of process-related particle size distributions from active particle sampling at the Los Alamos National Laboratory uranium foundry

    SciTech Connect

    Plionis, Alexander A; Peterson, Dominic S; Tandon, Lav; Lamont, Stephen P

    2009-01-01

    Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.

  2. Stepwise linear discriminant analysis in computer-aided diagnosis: the effect of finite sample size

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chan, Heang-Ping; Petrick, Nicholas; Wagner, Robert F.; Hadjiiski, Lubomir M.

    1999-05-01

    In computer-aided diagnosis, a frequently-used approach is to first extract several potentially useful features from a data set. Effective features are then selected from this feature space, and a classifier is designed using the selected features. In this study, we investigated the effect of finite sample size on classifier accuracy when classifier design involves feature selection. The feature selection and classifier coefficient estimation stages of classifier design were implemented using stepwise feature selection and Fisher's linear discriminant analysis, respectively. The two classes used in our simulation study were assumed to have multidimensional Gaussian distributions, with a large number of features available for feature selection. We investigated the effect of different covariance matrices and means for the two classes on feature selection performance, and compared two strategies for sample space partitioning for classifier design and testing. Our results indicated that the resubstitution estimate was always optimistically biased, except in cases where too few features were selected by the stepwise procedure. When feature selection was performed using only the design samples, the hold-out estimate was always pessimistically biased. When feature selection was performed using the entire finite sample space, and the data was subsequently partitioned into design and test groups, the hold-out estimates could be pessimistically or optimistically biased, depending on the number of features available for selection, number of available samples, and their statistical distribution. All hold-out estimates exhibited a pessimistic bias when the parameters of the simulation were obtained from texture features extracted from mammograms in a previous study.
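    The simulation sketch below reproduces the qualitative pattern with pure-noise data (so the true error rate is 0.5): the resubstitution estimate and a "leaky" hold-out estimate, where features are selected on the entire sample before splitting, are optimistically biased, while hold-out with selection restricted to the design samples is not. The sample size, feature count and selection rule are illustrative assumptions, not the study's settings.

```python
# Sketch of how the placement of feature selection biases error estimates.
# Pure-noise data: no true class difference, so the true error rate is 0.5.
# Feature counts, sample sizes and the selection rule are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fisher_lda_error(X_train, y_train, X_test, y_test):
    """Train Fisher's linear discriminant and return the test error rate."""
    m0 = X_train[y_train == 0].mean(axis=0)
    m1 = X_train[y_train == 1].mean(axis=0)
    S = np.cov(X_train[y_train == 0], rowvar=False) + np.cov(X_train[y_train == 1], rowvar=False)
    w = np.linalg.solve(S + 1e-6 * np.eye(S.shape[0]), m1 - m0)
    threshold = w @ (m0 + m1) / 2
    pred = (X_test @ w > threshold).astype(int)
    return np.mean(pred != y_test)

def select_features(X, y, n_keep):
    """Keep the n_keep features with the largest two-sample |t| statistics."""
    t = np.abs(stats.ttest_ind(X[y == 0], X[y == 1]).statistic)
    return np.argsort(t)[-n_keep:]

n, p, n_keep, reps = 60, 100, 5, 200
resub, holdout_correct, holdout_leaky = [], [], []
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = np.repeat([0, 1], n // 2)
    train = rng.permutation(n)[: n // 2]
    test = np.setdiff1d(np.arange(n), train)
    # (a) resubstitution: select, train and test on the same samples
    f = select_features(X, y, n_keep)
    resub.append(fisher_lda_error(X[:, f], y, X[:, f], y))
    # (b) hold-out with selection done only on the design samples
    f = select_features(X[train], y[train], n_keep)
    holdout_correct.append(fisher_lda_error(X[train][:, f], y[train], X[test][:, f], y[test]))
    # (c) hold-out after selecting features on the *entire* sample (leaky)
    f = select_features(X, y, n_keep)
    holdout_leaky.append(fisher_lda_error(X[train][:, f], y[train], X[test][:, f], y[test]))

print("true error 0.50 | resubstitution %.2f | hold-out %.2f | leaky hold-out %.2f"
      % (np.mean(resub), np.mean(holdout_correct), np.mean(holdout_leaky)))
```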

  3. Validation of boar taint detection by sensory quality control: relationship between sample size and uncertainty of performance indicators.

    PubMed

    Mörlein, Daniel; Christensen, Rune Haubo Bojesen; Gertheiss, Jan

    2015-02-01

    To prevent impaired consumer acceptance due to insensitive sensory quality control, it is of primary importance to periodically validate the performance of the assessors. This communication showcases how the uncertainty of sensitivity and specificity estimates is influenced by the total number of assessed samples and the prevalence of positive (here: boar-tainted) samples. Furthermore, a statistically sound approach to determining the sample size necessary for performance validation is provided. Results show that a small sample size is associated with large uncertainty, i.e., wide confidence intervals, thus compromising the point estimates of assessor sensitivity. In turn, to reliably identify sensitive assessors with sufficient test power, a large sample size is needed for a given level of confidence. Easy-to-use tables for sample size estimation are provided. PMID:25460131
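    A minimal sketch of the uncertainty argument, assuming a true sensitivity, a prevalence of tainted samples and Wilson score intervals purely for illustration (these are not the values or the exact method of the paper):

```python
# Width of the confidence interval for assessor sensitivity as a function of the
# number of truly tainted samples assessed.  The assumed true sensitivity,
# prevalence and confidence level are illustrative, not values from the paper.
import numpy as np
from scipy import stats

def wilson_interval(successes, n, conf=0.95):
    """Wilson score interval for a binomial proportion."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    p_hat = successes / n
    centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

true_sensitivity = 0.80
prevalence = 0.25          # fraction of tainted samples in the validation set
for n_total in (40, 100, 200, 400):
    n_pos = int(round(n_total * prevalence))            # tainted samples only
    hits = int(round(n_pos * true_sensitivity))         # expected correct calls
    lo, hi = wilson_interval(hits, n_pos)
    print(f"{n_total:4d} samples ({n_pos:3d} tainted): sensitivity CI = [{lo:.2f}, {hi:.2f}]")
```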

  4. Size exclusion chromatography for analyses of fibroin in silk: optimization of sampling and separation conditions

    NASA Astrophysics Data System (ADS)

    Pawcenis, Dominika; Koperska, Monika A.; Milczarek, Jakub M.; Łojewski, Tomasz; Łojewska, Joanna

    2014-02-01

    A direct goal of this paper was to improve the methods of sample preparation and separation for analyses of the fibroin polypeptide with the use of size exclusion chromatography (SEC). The motivation for the study arises from our interest in natural polymers included in historic textile and paper artifacts, and is a logical response to the urgent need for developing rationale-based methods for materials conservation. The first step is to develop a reliable analytical tool which would give insight into fibroin structure and its changes caused by both natural and artificial ageing. To investigate the influence of preparation conditions, two sets of artificially aged samples were prepared (with and without NaCl in the sample solution) and measured by means of SEC with a multi-angle laser light scattering detector. It was shown that dialysis of fibroin dissolved in LiBr solution allows removal of the salt, which otherwise damages chromatographic columns and prevents reproducible analyses. Salt-rich (NaCl) aqueous solutions of fibroin improved the quality of the chromatograms.

  5. Evaluation of a digestion assay and determination of sample size and tissue for the reliable detection of Trichinella larvae in walrus meat.

    PubMed

    Leclair, Daniel; Forbes, Lorry B; Suppa, Sandy; Gajadhar, Alvin A

    2003-03-01

    A digestion assay was validated for the detection of Trichinella larvae in walrus (Odobenus rosmarus) meat, and appropriate samples for testing were determined using tissues from infected walruses harvested for food. Examination of muscles from 3 walruses showed that the tongue consistently contained approximately 2-6 times more larvae than the pectoral and intercostal muscles. Comparison of numbers of larvae in the root, body, and apex of the tongue from 3 walruses failed to identify a predilection site within the tongue, but the apex was considered an optimal tissue because of the high larval density within the tongue and the ease of collection. All 31 spiked samples weighing 50 g each and containing between 0.1 and 0.4 larvae per gram (lpg) were correctly identified as infected, indicating that the sensitivity of this procedure is adequate for diagnostic use. A sample size of 10 g consistently detected larvae in 2 walrus tongues containing ≥0.3 lpg (n = 40), and until additional data are available, sample sizes from individual walrus tongues should be a minimum of 10 g. This study provides the preliminary data that were used for the development of a food safety analytical protocol for the detection of Trichinella in walrus meat in arctic communities.

  6. What about N? A methodological study of sample-size reporting in focus group studies

    PubMed Central

    2011-01-01

    Background Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals and, secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and

  7. Analyzing insulin samples by size-exclusion chromatography: a column degradation study.

    PubMed

    Teska, Brandon M; Kumar, Amit; Carpenter, John F; Wempe, Michael F

    2015-04-01

    Investigating insulin analogs and probing their intrinsic stability at physiological temperature, we observed significant degradation in the size-exclusion chromatography (SEC) signal over a moderate number of insulin sample injections, which generated concerns about the quality of the separations. Therefore, our research goal was to identify the cause(s) for the observed signal degradation and attempt to mitigate the degradation in order to extend SEC column lifespan. In these studies, we used multiangle light scattering, nuclear magnetic resonance, and gas chromatography-mass spectrometry methods to evaluate column degradation. The results from these studies illustrate: (1) that zinc ions introduced by the insulin product produced the observed column performance issues; and (2) that including ethylenediaminetetraacetic acid, a zinc chelator, in the mobile phase helped to maintain column performance.

  8. Analysis, sample size, and power for estimating incremental net health benefit from clinical trial data.

    PubMed

    Willan, A R

    2001-06-01

    Stinnett and Mullahy recently introduced the concept of net health benefit as an alternative to cost-effectiveness ratios for the statistical analysis of patient-level data on the costs and health effects of competing interventions. Net health benefit addresses a number of problems associated with cost-effectiveness ratios by assuming a value for the willingness-to-pay for a unit of effectiveness. We extend the concept of net health benefit to demonstrate that standard statistical procedures can be used for the analysis, power, and sample size determinations of cost-effectiveness data. We also show that by varying the value of the willingness-to-pay, the point estimate and confidence interval for the incremental cost-effectiveness ratio can be determined. An example is provided.
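    A minimal sketch of the idea, with an assumed willingness-to-pay and illustrative effects and variances (not values from the paper): the incremental net benefit is a plain difference in means, so standard normal-theory formulas give the test, confidence interval and sample size.

```python
# Incremental net benefit b = lambda * delta_e - delta_c is a simple difference in
# means across arms, so standard normal-theory tests and sample-size formulas apply.
# All numerical inputs below (lambda, effects, variances) are illustrative assumptions.
import numpy as np
from scipy import stats

def sample_size_inb(lam, delta_e, delta_c, var_e, var_c, cov_ec,
                    alpha=0.05, power=0.90):
    """Per-arm sample size to detect a positive incremental net benefit."""
    b = lam * delta_e - delta_c                          # expected INB per patient
    var_b = lam**2 * var_e + var_c - 2 * lam * cov_ec    # variance of patient-level net benefit
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return int(np.ceil(2 * var_b * (z_a + z_b) ** 2 / b ** 2))

# Effectiveness in QALYs, costs in dollars, willingness-to-pay of $50,000 per QALY
n_per_arm = sample_size_inb(lam=50_000, delta_e=0.05, delta_c=1_000,
                            var_e=0.04, var_c=4e6, cov_ec=0.0)
print("required patients per arm:", n_per_arm)
```

    Varying the willingness-to-pay value and recording where the test for a positive incremental net benefit changes its conclusion traces out the point estimate and confidence interval for the incremental cost-effectiveness ratio, which is the mapping the abstract describes.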

  9. A bootstrap test for comparing two variances: simulation of size and power in small samples.

    PubMed

    Sun, Jiajing; Chernick, Michael R; LaBudde, Robert A

    2011-11-01

    An F statistic was proposed by Good and Chernick (1993), in an unpublished paper, to test the hypothesis of the equality of variances from two independent groups using the bootstrap; see Hall and Padmanabhan (1997) for a published reference where Good and Chernick (1993) is discussed. We look at various forms of bootstrap tests that use the F statistic to see whether any or all of them maintain the nominal size of the test over a variety of population distributions when the sample size is small. Chernick and LaBudde (2010) and Schenker (1985) showed that bootstrap confidence intervals for variances tend to provide considerably less coverage than their theoretical asymptotic coverage for skewed population distributions such as a chi-squared with 10 degrees of freedom or less or a log-normal distribution. The same difficulties may also be expected when looking at the ratio of two variances. Since bootstrap tests are related to constructing confidence intervals for the ratio of variances, we simulated the performance of these tests when the population distributions are gamma(2,3), uniform(0,1), Student's t with 10 degrees of freedom (df), normal(0,1), and log-normal(0,1), similar to those used in Chernick and LaBudde (2010). We find, surprisingly, that the results for the size of the tests are valid (reasonably close to the asymptotic value) for all the various bootstrap tests. Hence we also conducted a power comparison, and we find that bootstrap tests appear to have reasonable power for testing equivalence of variances.
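    The sketch below implements one simple bootstrap variant of such a test, resampling mean-centred pooled data so that the null hypothesis of equal variances holds in the bootstrap world; it illustrates the general approach rather than any of the specific forms compared in the paper, and the gamma populations and sample sizes are illustrative.

```python
# One simple bootstrap form of an F-type test for equal variances: the two samples
# are mean-centred and pooled, bootstrap pairs of samples are drawn from the pool
# (which enforces the null of equal variances), and the observed variance ratio is
# compared with its bootstrap null distribution.  Illustrative variant only.
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_variance_ratio_test(x, y, n_boot=5000):
    f_obs = np.var(x, ddof=1) / np.var(y, ddof=1)
    pooled = np.concatenate([x - x.mean(), y - y.mean()])   # centred pooled data
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        f_b = np.var(xb, ddof=1) / np.var(yb, ddof=1)
        # two-sided: compare on the log scale so 2 and 1/2 are equally extreme
        if abs(np.log(f_b)) >= abs(np.log(f_obs)):
            count += 1
    return f_obs, count / n_boot

x = rng.gamma(shape=2.0, scale=3.0, size=15)      # skewed population, small n
y = rng.gamma(shape=2.0, scale=3.0, size=15)
f_obs, p_value = bootstrap_variance_ratio_test(x, y)
print(f"variance ratio = {f_obs:.2f}, bootstrap p-value = {p_value:.3f}")
```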

  10. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of an inexact criterion can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and often outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for alleviating finite sample problems in moderate-dimensional PR tasks.

  11. Weighted piecewise LDA for solving the small sample size problem in face verification.

    PubMed

    Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis

    2007-03-01

    A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.

  12. 34 CFR 85.900 - Adequate evidence.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Adequate evidence. 85.900 Section 85.900 Education Office of the Secretary, Department of Education GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 85.900 Adequate evidence. Adequate evidence means information sufficient to support...

  13. 12 CFR 380.52 - Adequate protection.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Adequate protection. 380.52 Section 380.52... ORDERLY LIQUIDATION AUTHORITY Receivership Administrative Claims Process § 380.52 Adequate protection. (a... interest of a claimant, the receiver shall provide adequate protection by any of the following means:...

  14. 12 CFR 380.52 - Adequate protection.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 5 2013-01-01 2013-01-01 false Adequate protection. 380.52 Section 380.52... ORDERLY LIQUIDATION AUTHORITY Receivership Administrative Claims Process § 380.52 Adequate protection. (a... interest of a claimant, the receiver shall provide adequate protection by any of the following means:...

  15. 12 CFR 380.52 - Adequate protection.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Adequate protection. 380.52 Section 380.52... ORDERLY LIQUIDATION AUTHORITY Receivership Administrative Claims Process § 380.52 Adequate protection. (a... interest of a claimant, the receiver shall provide adequate protection by any of the following means:...

  16. 21 CFR 1404.900 - Adequate evidence.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Adequate evidence. 1404.900 Section 1404.900 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 1404.900 Adequate evidence. Adequate evidence means information sufficient...

  17. A comparison of single-sample estimators of effective population sizes from genetic marker data.

    PubMed

    Wang, Jinliang

    2016-10-01

    In molecular ecology and conservation genetics studies, the important parameter of effective population size (Ne) is increasingly estimated from a single sample of individuals taken at random from a population and genotyped at a number of marker loci. Several estimators have been developed, based on information from linkage disequilibrium (LD), heterozygote excess (HE), molecular coancestry (MC) and sibship frequency (SF) in marker data. The most popular is the LD estimator, because it is more accurate than the HE and MC estimators and is simpler to calculate than the SF estimator. However, little is known about the accuracy of the LD estimator relative to that of SF, or about the robustness of all single-sample estimators when some simplifying assumptions (e.g. random mating, no linkage, no genotyping errors) are violated. This study fills the gaps and uses extensive simulations to compare the biases and accuracies of the four estimators for different population properties (e.g. bottlenecks, nonrandom mating, haplodiploidy), marker properties (e.g. linkage, polymorphisms) and sample properties (e.g. numbers of individuals and markers), and to compare the robustness of the four estimators when marker data are imperfect (with allelic dropouts). Extensive simulations show that the SF estimator is more accurate, has a much wider application scope (e.g. suitable for nonrandom mating such as selfing, haplodiploid species, dominant markers) and is more robust (e.g. to the presence of linkage and genotyping errors of markers) than the other estimators. An empirical data set from a Yellowstone grizzly bear population was analysed to demonstrate the use of the SF estimator in practice.

  19. Large sample area and size are needed for forest soil seed bank studies to ensure low discrepancy with standing vegetation.

    PubMed

    Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin

    2014-01-01

    A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples of 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a large number of small-sized samples (LNSS), in a 1 ha forest plot and set them to germinate in a greenhouse, and collected 30 samples of 1 m × 1 m × 10 cm, referred to as a small number of large-sized samples (SNLS), placing 10 each in a nearby secondary forest, shrub land and grassland. Only 15.7% of the woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% of woody plant species detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increase in the number of species with sampled area confirmed power-law relationships for the forest stand, the LNSS and the SNLS at all three recipient sites. Our results, although based on one forest, indicate that the conventional LNSS did not yield a high percentage of detection for woody species, whereas the SNLS strategy yielded a higher percentage of detection for woody species in the seed bank if samples were exposed to a better field germination environment. A 4 m² minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. An increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low. PMID:25140738

  20. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
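    The sketch below illustrates only the uncontroversial case endorsed in the conclusion: a blinded, nuisance-parameter-driven reassessment of the second-stage sample size, whose type I error can be checked by Monte Carlo. The design constants are assumptions, and the worst-case reassessment rules analysed in the paper are not reproduced here.

```python
# Monte Carlo sketch of blinded sample size reassessment driven only by the pooled
# (blinded) variance of the primary endpoint -- the well-established case that the
# paper does not question.  Design constants are illustrative assumptions; the
# worst-case reassessment rules analysed in the paper are not reproduced.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def one_trial(n1=30, target_delta=0.5, alpha=0.025, n_max=200):
    # stage 1 under H0 (no treatment effect)
    a1 = rng.normal(0.0, 1.0, n1)
    b1 = rng.normal(0.0, 1.0, n1)
    # blinded interim look: lump both arms together, ignoring treatment labels
    pooled_sd = np.std(np.concatenate([a1, b1]), ddof=1)
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(0.9)
    n_total = int(np.ceil(2 * (z * pooled_sd / target_delta) ** 2))
    n_total = int(np.clip(n_total, n1, n_max))
    n2 = n_total - n1
    a = np.concatenate([a1, rng.normal(0.0, 1.0, n2)])
    b = np.concatenate([b1, rng.normal(0.0, 1.0, n2)])
    t, p = stats.ttest_ind(a, b)
    return (p / 2 < alpha) and (t > 0)       # one-sided test at the final analysis

reps = 20000
print("empirical one-sided type I error:",
      round(sum(one_trial() for _ in range(reps)) / reps, 4))
```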

  1. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes

    PubMed Central

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-01-01

    Objectives To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Design Retrospective observational study. Setting A Norwegian 524-bed general hospital trust. Participants 1920 medical records selected from 1 January to 31 December 2010. Primary outcomes Rate, type and severity of adverse events identified in two different sample sizes of records, 10 and 70 records selected bi-weekly. Results In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. Conclusions The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude whether the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. PMID:27113238

  2. The Effect of Small Sample Size on Two-Level Model Estimates: A Review and Illustration

    ERIC Educational Resources Information Center

    McNeish, Daniel M.; Stapleton, Laura M.

    2016-01-01

    Multilevel models are an increasingly popular method to analyze data that originate from a clustered or hierarchical structure. To effectively utilize multilevel models, one must have an adequately large number of clusters; otherwise, some model parameters will be estimated with bias. The goals for this paper are to (1) raise awareness of the…

  3. Processing Variables of Alumina Slips and Their Effects on the Density and Grain Size of the Sintered Sample

    SciTech Connect

    Rowley, R.; Chu, H.

    2002-01-01

    High densities and small grain size of alumina ceramic bodies provide high strength and better mechanical properties than lower density and larger grain size bodies. The final sintered density and grain size of slip-cast alumina samples depend greatly on the processing of the slip and the alumina powder, as well as the sintering schedule. Many different variables were explored, including initial powder particle size, slurry solids percent, amount and type of dispersant used, amount and type of binder used, and sintering schedule. Although the experimentation is not complete, to this point the sample with the highest density and smallest grain size has been a SM8/Nano mixture with Darvan C as the dispersant and polyvinyl alcohol (PVA) as the binder, with a solids loading of 70 wt% and a sintering schedule of 1500 °C for 2 hours. The resultant density was 98.81% of theoretical and the average grain size was approximately 2.5 µm.

  4. Statistical grand rounds: a review of analysis and sample size calculation considerations for Wilcoxon tests.

    PubMed

    Divine, George; Norton, H James; Hunt, Ronald; Dienemann, Jacqueline

    2013-09-01

    When a study uses an ordinal outcome measure with unknown differences in the anchors and a small range such as 4 or 7, use of the Wilcoxon rank sum test or the Wilcoxon signed rank test may be most appropriate. However, because nonparametric methods are at best indirect functions of standard measures of location such as means or medians, the choice of the most appropriate summary measure can be difficult. The issues underlying use of these tests are discussed. The Wilcoxon-Mann-Whitney odds directly reflects the quantity that the rank sum procedure actually tests, and thus it can be a superior summary measure. Unlike the means and medians, its value will have a one-to-one correspondence with the Wilcoxon rank sum test result. The companion article appearing in this issue of Anesthesia & Analgesia ("Aromatherapy as Treatment for Postoperative Nausea: A Randomized Trial") illustrates these issues and provides an example of a situation for which the medians imply no difference between 2 groups, even though the groups are, in fact, quite different. The trial cited also provides an example of a single sample that has a median of zero, yet there is a substantial shift for much of the nonzero data, and the Wilcoxon signed rank test is quite significant. These examples highlight the potential discordance between medians and Wilcoxon test results. Along with the issues surrounding the choice of a summary measure, there are considerations for the computation of sample size and power, confidence intervals, and multiple comparison adjustment. In addition, despite the increased robustness of the Wilcoxon procedures relative to parametric tests, some circumstances in which the Wilcoxon tests may perform poorly are noted, along with alternative versions of the procedures that correct for such limitations.  PMID:23456667
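    A minimal sketch of the Wilcoxon-Mann-Whitney odds as a summary measure that corresponds directly to the rank sum test, using made-up ordinal scores for illustration:

```python
# Wilcoxon-Mann-Whitney odds: p = P(X > Y) + 0.5 * P(X = Y), WMW odds = p / (1 - p).
# The example data are illustrative (an ordinal 0-3 nausea-type score), not the
# trial data discussed in the companion article.
import numpy as np
from scipy import stats

def wmw_odds(x, y):
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).mean()
    ties = (x[:, None] == y[None, :]).mean()
    p = greater + 0.5 * ties
    return p / (1 - p)

treated = np.array([0, 0, 0, 1, 1, 1, 2, 0, 0, 1])   # lower scores = less nausea
control = np.array([0, 1, 1, 2, 2, 3, 1, 0, 2, 1])
odds = wmw_odds(control, treated)     # odds that a control score exceeds a treated score
u_stat, p_value = stats.mannwhitneyu(control, treated, alternative="two-sided")
print(f"WMW odds = {odds:.2f}, Mann-Whitney p = {p_value:.3f}")
print("medians:", np.median(control), "vs", np.median(treated))
```

    Unlike the group medians, the WMW odds has a one-to-one correspondence with the rank sum test result, which is the point made in the abstract.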

  5. Statistical issues including design and sample size calculation in thorough QT/QTc studies.

    PubMed

    Zhang, Joanne; Machado, Stella G

    2008-01-01

    After several drugs were removed from the market in recent years because of death due to ventricular tachycardia resulting from drug-induced QT prolongation (Khongphatthanayothin et al., 1998; Lasser et al., 2002; Pratt et al., 1994; Wysowski et al., 2001), the ICH Regulatory agencies requested all sponsors of new drugs to conduct a clinical study, named a Thorough QT/QTc (TQT) study, to assess any possible QT prolongation due to the study drug. The final version of the ICH E14 guidance (ICH, 2005) for "The Clinical Evaluation of QT/QTc Interval Prolongation and Proarrhythmic Potential for Nonantiarrhythmic Drugs" was released in May 2005. The purpose of the ICH E14 guidance (ICH, 2005) is to provide recommendations to sponsors concerning the design, conduct, analysis, and interpretation of clinical studies to assess the potential of a drug to delay cardiac repolarization. The guideline, however, is not specific on several issues. In this paper, we try to address some statistical issues, including study design, primary statistical analysis, assay sensitivity analysis, and the calculation of the sample size for a TQT study.

  6. Inference and sample size calculation in the fit assessment of filtering facepiece respirators.

    PubMed

    Zhang, Zhiwei; Kotz, Richard M

    2008-01-01

    Filtering facepiece respirators have recently been cleared by the U.S. Food and Drug Administration (FDA) for use by the general public in public health medical emergencies such as pandemic influenza. In the fit assessment of these devices it is important to distinguish between the two sources of variability: population heterogeneity and random fluctuations over repeated donnings. The FDA Special Controls Guidance Document (SCGD) which describes these devices and their evaluation, recommends that the fit performance of a filtering facepiece respirator be evaluated in terms of the proportion of users who will receive a specified level of protection 95% of the time. A point estimator of this proportion is easily obtained under an analysis of variance model, and the SCGD suggests bootstrap as one possible approach to interval estimation. This paper describes a closed-form procedure to obtain confidence intervals and provides sample size formulas. Simulation results suggest that the proposed procedure performs well in realistic settings and compares favorably to two simple bootstrap procedures. PMID:18607803

  7. Data with Hierarchical Structure: Impact of Intraclass Correlation and Sample Size on Type-I Error

    PubMed Central

    Musca, Serban C.; Kamiejski, Rodolphe; Nugier, Armelle; Méot, Alain; Er-Rafiy, Abdelatif; Brauer, Markus

    2011-01-01

    Least squares analyses (e.g., ANOVAs, linear regressions) of hierarchical data lead to Type-I error rates that depart severely from the nominal Type-I error rate assumed. Thus, when least squares methods are used to analyze hierarchical data coming from designs in which some groups are assigned to the treatment condition, and others to the control condition (i.e., the widely used “groups nested under treatment” experimental design), the Type-I error rate is seriously inflated, leading too often to the incorrect rejection of the null hypothesis (i.e., the incorrect conclusion of an effect of the treatment). To highlight the severity of the problem, we present simulations showing how the Type-I error rate is affected under different conditions of intraclass correlation and sample size. For all simulations the Type-I error rate after application of the popular Kish (1965) correction is also considered, and the limitations of this correction technique are discussed. We conclude with suggestions on how one should collect and analyze data bearing a hierarchical structure. PMID:21687445
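    The Kish-style correction referred to here scales the sample size by the design effect, 1 + (m - 1) * ICC. A minimal sketch, with illustrative cluster sizes and ICC values:

```python
# Design effect and effective sample size for clustered data (Kish, 1965):
# deff = 1 + (m - 1) * ICC, n_eff = n / deff.  Cluster size and ICC values below
# are illustrative assumptions.
def design_effect(cluster_size, icc):
    return 1.0 + (cluster_size - 1) * icc

n_total = 600          # e.g. 30 groups of 20 participants
m = 20
for icc in (0.01, 0.05, 0.10, 0.20):
    deff = design_effect(m, icc)
    print(f"ICC = {icc:.2f}: design effect = {deff:.2f}, "
          f"effective sample size = {n_total / deff:.0f}")
```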

  8. Sample size and sampling errors as the source of dispersion in chemical analyses. [for high-Ti lunar basalt

    NASA Technical Reports Server (NTRS)

    Clanton, U. S.; Fletcher, C. R.

    1976-01-01

    The paper describes a Monte Carlo model for simulation of two-dimensional representations of thin sections of some of the more common igneous rock textures. These representations are extrapolated to three dimensions to develop a volume of 'rock'. The model (here applied to a medium-grained high-Ti basalt) can be used to determine a statistically significant sample for a lunar rock or to predict the probable errors in the oxide contents that can occur during the analysis of a sample that is not representative of the parent rock.

  9. Sampling hazelnuts for aflatoxin: Effects of sample size and accept/reject limit on reducing risk of misclassifying lots

    Technology Transfer Automated Retrieval System (TEKTRAN)

    About 100 countries have established regulatory limits for aflatoxin in food and feeds. Because these limits vary widely among regulating countries, the Codex Committee on Food Additives and Contaminants (CCFAC) began work in 2004 to harmonize aflatoxin limits and sampling plans for aflatoxin in alm...

  10. Sample Size Planning for the Squared Multiple Correlation Coefficient: Accuracy in Parameter Estimation via Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken

    2008-01-01

    Methods of sample size planning are developed from the accuracy in parameter estimation approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide necessary sample size…

  11. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile.

    PubMed

    Kostoulas, P; Nielsen, S S; Browne, W J; Leontides, L

    2013-06-01

    Disease cases are often clustered within herds or, more generally, within groups that share common characteristics. Sample size formulae must adjust for the within-cluster correlation of the primary sampling units. Traditionally, the intra-cluster correlation coefficient (ICC), which is an average measure of the data heterogeneity, has been used to modify formulae for individual sample size estimation. However, subgroups of animals sharing common characteristics may exhibit markedly less or more heterogeneity. Hence, sample size estimates based on the ICC may not achieve the desired precision and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity, thus optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium subsp. paratuberculosis infection in Danish dairy cattle and a study on critical control points for Salmonella cross-contamination of pork in Greek slaughterhouses.

  12. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    PubMed Central

    Penton, C. Ryan; Gupta, Vadakattu V. S. R.; Yu, Julian; Tiedje, James M.

    2016-01-01

    We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5, and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified glomeromycota while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation. PMID:27313569

  13. Obtained effect size as a function of sample size in approved antidepressants: a real-world illustration in support of better trial design.

    PubMed

    Gibertini, Michael; Nations, Kari R; Whitaker, John A

    2012-03-01

    The high failure rate of antidepressant trials has spurred exploration of the factors that affect trial sensitivity. In the current analysis, Food and Drug Administration antidepressant drug registration trial data compiled by Turner et al. is extended to include the most recently approved antidepressants. The expanded dataset is examined to further establish the likely population effect size (ES) for monoaminergic antidepressants and to demonstrate the relationship between observed ES and sample size in trials on compounds with proven efficacy. Results indicate that the overall underlying ES for antidepressants is approximately 0.30, and that the variability in observed ES across trials is related to the sample size of the trial. The current data provide a unique real-world illustration of an often underappreciated statistical truism: that small N trials are more likely to mislead than to inform, and that by aligning sample size to the population ES, risks of both erroneously high and low effects are minimized. The results in the current study make this abstract concept concrete and will help drug developers arrive at informed gate decisions with greater confidence and fewer risks, improving the odds of success for future antidepressant trials.
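    The relationship between sample size and the spread of the observed effect size can be illustrated by simulation; the sketch below assumes a true standardized effect of 0.30 and arbitrary per-arm sample sizes chosen only for illustration.

```python
# Spread of the observed standardized effect size across simulated trials when the
# true effect is 0.30, for several per-arm sample sizes.  Sample sizes and the
# number of simulated trials are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
true_d, reps = 0.30, 5000

for n_per_arm in (30, 100, 300, 1000):
    obs_d = np.empty(reps)
    for i in range(reps):
        drug = rng.normal(true_d, 1.0, n_per_arm)
        placebo = rng.normal(0.0, 1.0, n_per_arm)
        pooled_sd = np.sqrt((drug.var(ddof=1) + placebo.var(ddof=1)) / 2)
        obs_d[i] = (drug.mean() - placebo.mean()) / pooled_sd
    lo, hi = np.percentile(obs_d, [2.5, 97.5])
    print(f"n/arm = {n_per_arm:4d}: 95% of observed d in [{lo:.2f}, {hi:.2f}]")
```

    Small trials scatter the observed effect widely around 0.30, producing both spuriously large and spuriously small estimates, which is the misleading behaviour the abstract describes.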

  14. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    DOE PAGES

    Penton, C. Ryan; Gupta, Vadakattu V. S. R.; Yu, Julian; Tiedje, James M.

    2016-06-02

    We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5, and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified glomeromycota while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation.

  15. [Tobacco smoking in a sample of middle-size city inhabitants aged 35-55].

    PubMed

    Maniecka-Bryła, Irena; Maciak, Aleksandra; Kowalska, Alina; Bryła, Marek

    2008-01-01

    Tobacco smoking constitutes a common risk factor for the majority of civilization diseases, such as cardiovascular diseases, malignant neoplasms, and digestive and respiratory system disorders. Tobacco-related harm also includes the exacerbation of chronic diseases, for example diabetes and multiple sclerosis. Poland is one of those countries where the prevalence of smoking is especially widespread. In Poland 42% of men and 25% of women smoke cigarettes, and the number of addicted people amounts to approximately 10 million. The latest data, from the year 2003, show that the number of cigarettes smoked per inhabitant in Poland has risen fourfold since the beginning of the 21st century. This paper presents an analysis of the prevalence of tobacco smoking among inhabitants aged 35-55 years of a middle-sized city in the Lodz province. The study sample comprised 124 people, including 75 females and 49 males. The research tool was a questionnaire survey containing questions concerning cigarette smoking. The study found that 39.5% of respondents (41.3% of females and 36.7% of males) smoked cigarettes. The percentage of former smokers amounted to 15.3%, and the percentage of non-smokers was higher than that of regular smokers, at 44.8%. The results showed that the majority of smokers were in the age interval of 45 to 49 years. Cigarette smoking affected smokers' health: blood pressure and lipid levels were higher among smokers than among people who did not smoke cigarettes. The results of the study confirm that there is a strong need to implement programmes for limiting tobacco smoking, which may contribute to lowering the risk of tobacco-related diseases. PMID:19189562

  16. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

    SciTech Connect

    Ferson, S.

    1996-12-31

    A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges on [0,1] has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
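    A minimal sketch of the interval operators implied by the Fréchet inequalities for the case where nothing is known about the dependence between subevents; the input intervals are illustrative.

```python
# Interval AND / OR / NOT operators from the Fréchet inequalities, for the case
# where nothing is known about the dependence between subevents.  The input
# intervals below are illustrative.
def and_interval(p, q):
    """Bounds on P(A and B) when P(A) lies in p and P(B) lies in q, dependence unknown."""
    (pl, pu), (ql, qu) = p, q
    return (max(0.0, pl + ql - 1.0), min(pu, qu))

def or_interval(p, q):
    """Bounds on P(A or B) when P(A) lies in p and P(B) lies in q, dependence unknown."""
    (pl, pu), (ql, qu) = p, q
    return (max(pl, ql), min(1.0, pu + qu))

def not_interval(p):
    pl, pu = p
    return (1.0 - pu, 1.0 - pl)

F = (0.2, 0.4)          # imprecise probability of subevent F
G = (0.1, 0.3)          # imprecise probability of subevent G
print("F AND G:", and_interval(F, G))
print("F OR  G:", or_interval(F, G))
print("NOT F  :", not_interval(F))
```

    Applying these operators level-wise over a nested stack of confidence intervals gives the kind of iteration the abstract describes.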

  17. Applying Individual Tree Structure From Lidar to Address the Sensitivity of Allometric Equations to Small Sample Sizes.

    NASA Astrophysics Data System (ADS)

    Duncanson, L.; Dubayah, R.

    2015-12-01

    Lidar remote sensing is widely applied for mapping forest carbon stocks, and technological advances have improved our ability to capture structural details from forests, even resolving individual trees. Despite these advancements, the accuracy of forest aboveground biomass models remains limited by the quality of field estimates of biomass. The accuracies of field estimates are inherently dependent on the accuracy of the allometric equations used to relate measurable attributes to biomass. These equations are calibrated with relatively small samples of often spatially clustered trees. This research focuses on one of many issues involving allometric equations - understanding how sensitive allometric parameters are to the sample sizes used to fit them. We capitalize on recent advances in lidar remote sensing to extract individual tree structural information from six high-resolution airborne lidar datasets in the United States. We remotely measure millions of tree heights and crown radii, and fit allometric equations to the relationship between tree height and radius at a 'population' level, in each site. We then extract samples from our tree database, and build allometries on these smaller samples of trees, with varying sample sizes. We show that for the allometric relationship between tree height and crown radius, small sample sizes produce biased allometric equations that overestimate height for a given crown radius. We extend this analysis using translations from the literature to address potential implications for biomass, showing that site-level biomass may be greatly overestimated when applying allometric equations developed with the typically small sample sizes used in popular allometric equations for biomass.

  18. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    PubMed

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article is about how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study. PMID:16676681
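    A worked arithmetic sketch of the rule of thumb described here, with an assumed dropout rate and starting sample size, compared against the common n/(1 - d) inflation:

```python
# Rule of thumb from the abstract: add to the dropout-free sample size the number
# of subjects expected to drop out of a sample of that original size.
# The dropout rate and starting sample size are illustrative assumptions.
import math

def adjust_for_dropouts(n_no_dropout, dropout_rate):
    return n_no_dropout + math.ceil(n_no_dropout * dropout_rate)

n0, d = 60, 0.25
print("rule-of-thumb adjusted n:", adjust_for_dropouts(n0, d))   # 60 + 15 = 75
print("common n/(1-d) inflation:", math.ceil(n0 / (1 - d)))      # 80, for comparison
```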

  19. Automated Gel Size Selection to Improve the Quality of Next-generation Sequencing Libraries Prepared from Environmental Water Samples.

    PubMed

    Uyaguari-Diaz, Miguel I; Slobodan, Jared R; Nesbitt, Matthew J; Croxen, Matthew A; Isaac-Renton, Judith; Prystajecky, Natalie A; Tang, Patrick

    2015-04-17

    Next-generation sequencing of environmental samples can be challenging because of the variable DNA quantity and quality in these samples. High quality DNA libraries are needed for optimal results from next-generation sequencing. Environmental samples such as water may have low quality and quantities of DNA as well as contaminants that co-precipitate with DNA. The mechanical and enzymatic processes involved in extraction and library preparation may further damage the DNA. Gel size selection enables purification and recovery of DNA fragments of a defined size for sequencing applications. Nevertheless, this task is one of the most time-consuming steps in the DNA library preparation workflow. The protocol described here enables complete automation of agarose gel loading, electrophoretic analysis, and recovery of targeted DNA fragments. In this study, we describe a high-throughput approach to prepare high quality DNA libraries from freshwater samples that can be applied also to other environmental samples. We used an indirect approach to concentrate bacterial cells from environmental freshwater samples; DNA was extracted using a commercially available DNA extraction kit, and DNA libraries were prepared using a commercial transposon-based protocol. DNA fragments of 500 to 800 bp were gel size selected using Ranger Technology, an automated electrophoresis workstation. Sequencing of the size-selected DNA libraries demonstrated significant improvements to read length and quality of the sequencing reads.

  20. Effect of sample area and sieve size on benthic macrofaunal community condition assessments in California enclosed bays and estuaries.

    PubMed

    Hammerstrom, Kamille K; Ranasinghe, J Ananda; Weisberg, Stephen B; Oliver, John S; Fairey, W Russell; Slattery, Peter N; Oakden, James M

    2012-10-01

    Benthic macrofauna are used extensively for environmental assessment, but the area sampled and sieve sizes used to capture animals often differ among studies. Here, we sampled 80 sites using 3 different sized sampling areas (0.1, 0.05, 0.0071 m²) and sieved those sediments through each of 2 screen sizes (0.5, 1 mm) to evaluate their effect on number of individuals, number of species, dominance, nonmetric multidimensional scaling (MDS) ordination, and benthic community condition indices that are used to assess sediment quality in California. Sample area had little effect on abundance but substantially affected numbers of species, which are not easily scaled to a standard area. Sieve size had a substantial effect on both measures, with the 1-mm screen capturing only 74% of the species and 68% of the individuals collected in the 0.5-mm screen. These differences, though, had little effect on the ability to differentiate samples along gradients in ordination space. Benthic indices generally ranked sample condition in the same order regardless of gear, although the absolute scoring of condition was affected by gear type. The largest differences in condition assessment were observed for the 0.0071-m² gear. Benthic indices based on numbers of species were more affected than those based on relative abundance, primarily because we were unable to scale species number to a common area as we did for abundance. PMID:20938972

  1. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    PubMed

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I) was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: An analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSDsampling in the same region as the RSDanalysis for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit

  2. Core size effect on the dry and saturated ultrasonic pulse velocity of limestone samples.

    PubMed

    Ercikdi, Bayram; Karaman, Kadir; Cihangir, Ferdi; Yılmaz, Tekin; Aliyazıcıoğlu, Şener; Kesimal, Ayhan

    2016-12-01

    This study presents the effect of core length on the saturated (UPVsat) and dry (UPVdry) P-wave velocities of four different biomicritic limestone samples, namely light grey (BL-LG), dark grey (BL-DG), reddish (BL-R) and yellow (BL-Y), using core samples having different lengths (25-125 mm) at a constant diameter (54.7 mm). The saturated P-wave velocity (UPVsat) of all core samples generally decreased with increasing sample length. However, the dry P-wave velocity (UPVdry) of samples obtained from BL-LG and BL-Y limestones increased with increasing sample length. In contrast to the literature, the dry P-wave velocity (UPVdry) values of core samples having a length of 75, 100 and 125 mm were consistently higher (2.8-46.2%) than the saturated values (UPVsat). Chemical and mineralogical analyses have shown that the P-wave velocity is very sensitive to the calcite and clay minerals potentially leading to the weakening/disintegration of rock samples in the presence of water. Severe fluctuations in UPV values were observed between 25 and 75 mm sample lengths; thereafter, a trend of stabilization was observed. The maximum variation of UPV values between the sample lengths of 75 mm and 125 mm was only 7.3%. Therefore, the threshold core sample length was interpreted as 75 mm for UPV measurement in the biomicritic limestone samples used in this study.

  4. Multiscale sampling of plant diversity: Effects of minimum mapping unit size

    USGS Publications Warehouse

    Stohlgren, T.J.; Chong, G.W.; Kalkhan, M.A.; Schell, L.D.

    1997-01-01

    Only a small portion of any landscape can be sampled for vascular plant diversity because of constraints of cost (salaries, travel time between sites, etc.). Often, the investigator decides to reduce the cost of creating a vegetation map by increasing the minimum mapping unit (MMU), and/or by reducing the number of vegetation classes to be considered. Questions arise about what information is sacrificed when map resolution is decreased. We compared plant diversity patterns from vegetation maps made with 100-ha, 50-ha, 2-ha, and 0.02-ha MMUs in a 754-ha study area in Rocky Mountain National Park, Colorado, United States, using four 0.025-ha and 21 0.1-ha multiscale vegetation plots. We developed and tested species-log(area) curves, correcting the curves for within-vegetation type heterogeneity with Jaccard's coefficients. Total species richness in the study area was estimated from vegetation maps at each resolution (MMU), based on the corrected species-area curves, total area of the vegetation type, and species overlap among vegetation types. With the 0.02-ha MMU, six vegetation types were recovered, resulting in an estimated 552 species (95% CI = 520-583 species) in the 754-ha study area (330 plant species were observed in the 25 plots). With the 2-ha MMU, five vegetation types were recognized, resulting in an estimated 473 species for the study area. With the 50-ha MMU, 439 plant species were estimated for the four vegetation types recognized in the study area. With the 100-ha MMU, only three vegetation types were recognized, resulting in an estimated 341 plant species for the study area. Locally rare species and keystone ecosystems (areas of high or unique plant diversity) were missed at the 2-ha, 50-ha, and 100-ha scales. To evaluate the effects of minimum mapping unit size requires: (1) an initial stratification of homogeneous, heterogeneous, and rare habitat types; and (2) an evaluation of within-type and between-type heterogeneity generated by environmental
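
    A minimal, hedged sketch of the species-log(area) fitting step described above: regress plot-level species richness on log10(area) and use the fitted curve to extrapolate richness to the area mapped for a vegetation type. The plot data below are synthetic, and the within-type heterogeneity correction with Jaccard's coefficients is not reproduced.

      import numpy as np

      def fit_species_log_area(areas_ha, richness):
          """Least-squares fit of S = a + b * log10(area); returns (a, b)."""
          x = np.log10(np.asarray(areas_ha, dtype=float))
          b, a = np.polyfit(x, np.asarray(richness, dtype=float), 1)
          return a, b

      def predict_richness(a, b, area_ha):
          return a + b * np.log10(area_ha)

      # Synthetic multiscale plot data (area in ha, observed species counts):
      areas    = [0.0001, 0.001, 0.01, 0.1, 0.025]
      richness = [4, 9, 15, 22, 17]
      a, b = fit_species_log_area(areas, richness)
      print(predict_richness(a, b, 120))   # extrapolate to a 120-ha vegetation type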

  5. Deduction of aerosol size distribution from particle sampling by whisker collectors

    NASA Astrophysics Data System (ADS)

    Schäfer, H. J.; Pfeifer, H. J.

    1983-12-01

    A method of deducing airborne particle size distributions from the deposition on a collector is described. The method basically consists of collecting submicron-sized particles on whisker filters for subsequent electron-microscopic examination. The empirical size distributions on the collectors can be approximated by log-normal functions. Moreover, it has been found that the variation in particle distribution across a four-stage whisker filter can be interpreted on the basis of a simple model of the collection process. The effective absorption coefficient derived from this modeling is used to correct the empirical data for the effect of a selective collection characteristic.

  6. Determination of a representative volume element based on the variability of mechanical properties with sample size in bread.

    PubMed

    Ramírez, Cristian; Young, Ashley; James, Bryony; Aguilera, José M

    2010-10-01

    Quantitative analysis of food structure is commonly obtained by image analysis of a small portion of the material that may not be representative of the whole sample. In order to quantify structural parameters (air cells) of 2 types of bread (bread and bagel), the concept of representative volume element (RVE) was employed. The RVE for bread, bagel, and gelatin-gel (used as control) was obtained from the relationship between sample size and the coefficient of variation, calculated from the apparent Young's modulus measured on 25 replicates. The RVE was obtained when the coefficient of variation for different sample sizes converged to a constant value. In the 2 types of bread tested, the coefficient of variation tended to decrease as the sample size increased, while in the homogeneous gelatin-gel it remained essentially constant at 2.3% to 2.4%. The RVE turned out to be cubes with sides of 45 mm for bread, 20 mm for bagels, and 10 mm for gelatin-gel (smallest sample tested). The quantitative image analysis as well as visual observation demonstrated that bread presented the largest dispersion of air-cell sizes. Moreover, both the ratio of maximum air-cell area/image area and maximum air-cell height/image height were greater for bread (values of 0.05 and 0.30, respectively) than for bagels (0.03 and 0.20, respectively). Therefore, the size and the size variation of air cells present in the structure determined the size of the RVE. It was concluded that the RVE is highly dependent on the heterogeneity of the structure of the types of baked products.
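
    A minimal sketch (not the authors' code) of the convergence criterion described above: compute the coefficient of variation of the apparent Young's modulus across replicates for each candidate sample size and take the RVE as the smallest size at which the CV stops changing by more than a chosen tolerance. The tolerance, data layout, and synthetic replicates are illustrative assumptions.

      import numpy as np

      def rve_from_cv(modulus_by_size, tol=0.5):
          """modulus_by_size: dict mapping sample side length (mm) to an array of
          apparent Young's modulus replicates. Returns the smallest size whose CV
          (in %) differs from that of the next larger size by less than `tol`
          percentage points (illustrative convergence rule)."""
          sizes = sorted(modulus_by_size)
          cv = {s: 100.0 * np.std(modulus_by_size[s], ddof=1) / np.mean(modulus_by_size[s])
                for s in sizes}
          for smaller, larger in zip(sizes, sizes[1:]):
              if abs(cv[smaller] - cv[larger]) < tol:
                  return smaller, cv
          return sizes[-1], cv  # no convergence within the sizes tested

      # Illustrative use with synthetic replicates (25 per size, as in the study design):
      rng = np.random.default_rng(0)
      data = {s: rng.normal(1.0, 0.3 / s**0.5, 25) for s in (10, 20, 30, 45, 60)}
      print(rve_from_cv(data))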

  7. Quantification of physiological levels of vitamin D₃ and 25-hydroxyvitamin D₃ in porcine fat and liver in subgram sample sizes.

    PubMed

    Burild, Anders; Frandsen, Henrik L; Poulsen, Morten; Jakobsen, Jette

    2014-10-01

    Most methods for the quantification of physiological levels of vitamin D3 and 25-hydroxyvitamin D3 are developed for food analysis where the sample size is not usually a critical parameter. In contrast, in life science studies sample sizes are often limited. A very sensitive liquid chromatography with tandem mass spectrometry method was developed to quantify vitamin D3 and 25-hydroxyvitamin D3 simultaneously in porcine tissues. A sample of 0.2-1 g was saponified followed by liquid-liquid extraction and normal-phase solid-phase extraction. The analytes were derivatized with 4-phenyl-1,2,4-triazoline-3,5-dione to improve the ionization efficiency by electrospray ionization. The method was validated in porcine liver and adipose tissue, and the accuracy was determined to be 72-97% for vitamin D3 and 91-124% for 25-hydroxyvitamin D3 . The limit of quantification was <0.1 ng/g, and the precision varied between 1.4 and 16% depending on the level of spiking. The small sample size required for the described method enables quantification of vitamin D3 and 25-hydroxyvitamin D3 in tissues from studies where sample sizes are limited.

  8. Evaluation of sampling sizes on the intertidal macroinfauna assessment in a subtropical mudflat of Hong Kong.

    PubMed

    Shen, Ping-Ping; Zhou, Hong; Zhao, Zhenye; Yu, Xiao-Zhang; Gu, Ji-Dong

    2012-08-01

    In this study, two types of sediment cores with different diameters were used to collect sediment samples from an intertidal mudflat in Hong Kong to investigate the influence of sampling unit on the quantitative assessment of benthic macroinfaunal communities. Both univariate and multivariate analyses were employed to detect differences in sampling efficiencies between the two samplers through total abundance and biomass, species richness and diversity, community structure, and relative abundance of major taxa of the infaunal community. The species-area curves were further compared to examine the influence of the sampling units. Results showed that the two sampling devices provided similar information on the estimates of species diversity, density and species composition of the benthos in the main part of the mudflat, where the sediment was fine and homogeneous; but at the station that contained coarse sand and gravel, significant differences were detected between the quantitative assessments of macrobenthic infauna by the two samplers. Most importantly, the species-area curves indicated that more, smaller samples captured more species than fewer, larger ones for an equal total sampling area. Therefore, the efficiency of the sampler largely depended on the sediment properties, and sampling devices must be chosen based on the physical conditions and the desired level of precision for the organisms targeted by the sampling program. PMID:22766844

  10. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach

    PubMed Central

    Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric

    2016-01-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles. PMID:26943927

  11. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    PubMed

    Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.

  12. The effect of the sample size and location on contrast ultrasound measurement of perfusion parameters.

    PubMed

    Leinonen, Merja R; Raekallio, Marja R; Vainio, Outi M; Ruohoniemi, Mirja O; O'Brien, Robert T

    2011-01-01

    Contrast-enhanced ultrasound can be used to quantify tissue perfusion based on region of interest (ROI) analysis. The effect of the location and size of the ROI on the obtained perfusion parameters has been described in phantom, ex vivo and in vivo studies. We assessed the effects of location and size of the ROI on perfusion parameters in the renal cortex of 10 healthy, anesthetized cats using Definity contrast-enhanced ultrasound to estimate the importance of the ROI on quantification of tissue perfusion with contrast-enhanced ultrasound. Three separate sets of ROIs were placed in the renal cortex, varying in location, size or depth. There was a significant inverse association between increased depth or increased size of the ROI and peak intensity (P < 0.05). There was no statistically significant difference in the peak intensity between the ROIs placed in a row in the near field cortex. There was no significant difference in the ROIs with regard to arrival time, time to peak intensity and wash-in rate. When comparing two different ROIs in a patient with focal lesions, such as suspected neoplasia or infarction, the ROIs should always be placed at same depth and be as similar in size as possible.

  13. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    USGS Publications Warehouse

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 μm, respectively), followed by the collector street study area (70 μm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 μm. Finally, the feeder street study area showed the largest median particle size of nearly 200 μm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 μm in size. Distributions of particles ranging from 500 μm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
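
    As a hedged illustration of how a median particle size (d50) such as those quoted above can be read off a measured distribution, the sketch below interpolates the 50th percentile from cumulative mass fractions over size bins; the bin edges and masses are invented for illustration and are not the study's data.

      import numpy as np

      def d50(bin_edges_um, mass_per_bin):
          """Linear interpolation of the 50th-percentile particle size from a binned
          mass distribution. bin_edges_um has one more element than mass_per_bin."""
          cum = np.cumsum(mass_per_bin) / np.sum(mass_per_bin)   # fraction finer than each upper edge
          edges = np.asarray(bin_edges_um, dtype=float)
          return float(np.interp(0.5, np.concatenate(([0.0], cum)), edges))

      # Hypothetical sieve bins (um) and retained masses (mg):
      edges = [2, 32, 63, 125, 250, 500, 1000]
      mass  = [12, 30, 20, 18, 12, 8]
      print(f"d50 = {d50(edges, mass):.0f} um")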

  14. Method to study sample object size limit of small-angle x-ray scattering computed tomography

    NASA Astrophysics Data System (ADS)

    Choi, Mina; Ghammraoui, Bahaa; Badal, Andreu; Badano, Aldo

    2016-03-01

    Small-angle x-ray scattering (SAXS) imaging is an emerging medical tool that can be used for detailed in vivo tissue characterization and has the potential to provide added contrast to conventional x-ray projection and CT imaging. We used a publicly available MC-GPU code to simulate x-ray trajectories in a SAXS-CT geometry for a target material embedded in a water background material with varying sample sizes (1, 3, 5, and 10 mm). Our target materials were a water solution of gold nanoparticle (GNP) spheres with a radius of 6 nm and a water solution of dissolved bovine serum albumin (BSA) proteins, chosen for their well-characterized small-angle scatter profiles and strong scattering properties. The background material was water. Our objective is to study how the reconstructed scatter profile degrades at larger target imaging depths and increasing sample sizes. We have found that scatter profiles of the GNP in water can still be reconstructed at depths up to 5 mm embedded at the center of a 10 mm sample. Scatter profiles of BSA in water were also reconstructed at depths up to 5 mm in a 10 mm sample but with noticeable signal degradation as compared to the GNP sample. This work presents a method to study the sample size limits for future SAXS-CT imaging systems.

  15. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...
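
    The regulation text above is truncated, but the finite population correction it invokes is a standard adjustment. Below is a minimal, hedged sketch of the usual textbook calculation (infinite-population sample size for a proportion, shrunk by the FPC); the margin of error, confidence level, and population size are illustrative assumptions, not values taken from the NYTD rule.

      from math import ceil
      from statistics import NormalDist

      def fpc_sample_size(N, margin=0.05, conf=0.95, p=0.5):
          """Sample size for estimating a proportion p within +/- margin at the given
          confidence level, then reduced by the finite population correction for a
          population of size N (standard textbook formula)."""
          z = NormalDist().inv_cdf(0.5 + conf / 2)
          n0 = z**2 * p * (1 - p) / margin**2        # infinite-population size
          return ceil(n0 / (1 + (n0 - 1) / N))        # FPC-adjusted size

      # e.g. a follow-up population of 3,000 youth (illustrative value only):
      print(fpc_sample_size(3000))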

  16. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...

  17. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...

  18. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...

  19. The accuracy of instrumental neutron activation analysis of kilogram-size inhomogeneous samples.

    PubMed

    Blaauw, M; Lakmaker, O; van Aller, P

    1997-07-01

    The feasibility of quantitative instrumental neutron activation analysis (INAA) of samples in the kilogram range without internal standardization has been demonstrated by Overwater et al. (Anal. Chem. 1996, 68, 341). In their studies, however, they demonstrated only the agreement between the "corrected" γ-ray spectrum of homogeneous large samples and that of small samples of the same material. In this paper, the k0 calibration of the IRI facilities for large samples is described, and, this time in terms of (trace) element concentrations, some of Overwater's results for homogeneous materials are presented again, as well as results obtained from inhomogeneous materials and subsamples thereof. It is concluded that large-sample INAA can be as accurate as ordinary INAA, even when applied to inhomogeneous materials.

  20. Samples from subdivided populations yield biased estimates of effective size that overestimate the rate of loss of genetic variation

    PubMed Central

    Ryman, Nils; Allendorf, Fred W; Jorde, Per Erik; Laikre, Linda; Hössjer, Ola

    2014-01-01

    Many empirical studies estimating effective population size apply the temporal method that provides an estimate of the variance effective size through the amount of temporal allele frequency change under the assumption that the study population is completely isolated. This assumption is frequently violated, and the magnitude of the resulting bias is generally unknown. We studied how gene flow affects estimates of effective size obtained by the temporal method when sampling from a population system and provide analytical expressions for the expected estimate under an island model of migration. We show that the temporal method tends to systematically underestimate both local and global effective size when populations are connected by gene flow, and the bias is sometimes dramatic. The problem is particularly likely to occur when sampling from a subdivided population where high levels of gene flow obscure identification of subpopulation boundaries. In such situations, sampling in a manner that prevents biased estimates can be difficult. This phenomenon might partially explain the frequently reported unexpectedly low effective population sizes of marine populations that have raised concern regarding the genetic vulnerability of even exceptionally large populations. PMID:24034449

  1. Sample Size Considerations in Clinical Trials when Comparing Two Interventions using Multiple Co-Primary Binary Relative Risk Contrasts

    PubMed Central

    Ando, Yuki; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Sugimoto, Tomoyuki; Sozu, Takashi; Ohno, Yuko

    2015-01-01

    The effects of interventions are multi-dimensional. Using more than one primary endpoint is an attractive design feature in clinical trials because multiple endpoints capture a more complete characterization of the effects of an intervention and provide more informative intervention comparisons. For these reasons, multiple primary endpoints have become a common design feature in many disease areas such as oncology, infectious disease, and cardiovascular disease. More specifically, in medical product development, multiple endpoints are utilized as co-primary endpoints to evaluate the effect of new interventions. Although methodologies to address continuous co-primary endpoints are well developed, methodologies for binary endpoints are limited. In this paper, we describe power and sample size determination for clinical trials with multiple correlated binary endpoints, when relative risks are evaluated as co-primary. We consider a scenario where the objective is to evaluate evidence for superiority of a test intervention compared with a control intervention for all of the relative risks. We discuss normal approximation methods for power and sample size calculations and evaluate how the required sample size, power, and Type I error vary as a function of the correlations among the endpoints. We also discuss a simple but conservative procedure for appropriate sample size calculation. We then extend the methods to allow for interim monitoring using group-sequential methods.
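
    A hedged sketch of the kind of normal-approximation calculation discussed above, not the authors' exact procedure: power for a one-sided superiority test on the log relative-risk scale for each endpoint, with the joint power for co-primary endpoints approximated by the product of marginal powers (conservative when the endpoints are non-negatively correlated). The event probabilities, alpha, and sample size are illustrative assumptions.

      from math import log, sqrt
      from statistics import NormalDist

      def power_log_rr(p_test, p_ctrl, n_per_arm, alpha=0.025):
          """One-sided power for detecting RR = p_test/p_ctrl != 1 using the normal
          approximation on the log relative-risk scale."""
          z_a = NormalDist().inv_cdf(1 - alpha)
          se = sqrt((1 - p_test) / (n_per_arm * p_test) +
                    (1 - p_ctrl) / (n_per_arm * p_ctrl))
          return NormalDist().cdf(abs(log(p_test / p_ctrl)) / se - z_a)

      # Two co-primary binary endpoints (illustrative event probabilities):
      endpoints = [(0.30, 0.45), (0.20, 0.32)]   # (test, control) per endpoint
      n = 250
      marginal = [power_log_rr(pt, pc, n) for pt, pc in endpoints]
      joint_lower_bound = 1.0
      for pw in marginal:
          joint_lower_bound *= pw                # conservative if correlations >= 0
      print(marginal, joint_lower_bound)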

  2. Inert gases in a terra sample - Measurements in six grain-size fractions and two single particles from Lunar 20.

    NASA Technical Reports Server (NTRS)

    Heymann, D.; Lakatos, S.; Walton, J. R.

    1973-01-01

    Review of the results of inert gas measurements performed on six grain-size fractions and two single particles from four samples of Luna 20 material. Presented and discussed data include the inert gas contents, element and isotope systematics, radiation ages, and Ar-36/Ar-40 systematics.

  3. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  4. Sample Size and Power Estimates for a Confirmatory Factor Analytic Model in Exercise and Sport: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2011-01-01

    Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…

  5. Sampling date, leaf age and root size: implications for the study of plant C:N:P stoichiometry.

    PubMed

    Zhang, Haiyang; Wu, Honghui; Yu, Qiang; Wang, Zhengwen; Wei, Cunzheng; Long, Min; Kattge, Jens; Smith, Melinda; Han, Xingguo

    2013-01-01

    Plant carbon : nitrogen : phosphorus (C:N:P) ratios are powerful indicators of diverse ecological processes. During plant development and growth, plant C:N:P stoichiometry responds to environmental conditions and physiological constraints. However, variations caused by effects of sampling (i.e. sampling date, leaf age and root size) often have been neglected in previous studies. We investigated the relative contributions of sampling date, leaf age, root size and species identity to stoichiometric flexibility in a field mesocosm study and a natural grassland in Inner Mongolia. We found that sampling date, leaf age, root size and species identity all significantly affected C:N:P stoichiometry both in the pot study as well as in the field. Overall, C:N and C:P ratios increased significantly over time and with increasing leaf age and root size, while the dynamics of N:P ratios depended on species identity. Our results suggest that attempts to synthesize C:N:P stoichiometry data across studies that span regional to global scales and include many species need to better account for temporal variation.

  6. A Comparison of the Exact Kruskal-Wallis Distribution to Asymptotic Approximations for All Sample Sizes up to 105

    ERIC Educational Resources Information Center

    Meyer, J. Patrick; Seaman, Michael A.

    2013-01-01

    The authors generated exact probability distributions for sample sizes up to 35 in each of three groups ("n" less than or equal to 105) and up to 10 in each of four groups ("n" less than or equal to 40). They compared the exact distributions to the chi-square, gamma, and beta approximations. The beta approximation was best in…

  7. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
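
    The record above is truncated, but it concerns intraclass correlation values used when planning cluster randomized trials. As a hedged illustration of where such an ICC enters the calculation, the sketch below applies the standard design-effect inflation to an individually randomized sample size; it does not reproduce the empirical Bayes simulation approach the authors propose, and all planning values (effect size, ICC, cluster size) are illustrative.

      from math import ceil
      from statistics import NormalDist

      def cluster_trial_n(delta, sd, icc, cluster_size, alpha=0.05, power=0.80):
          """Per-arm totals for a two-arm comparison of means, inflated by the
          design effect DE = 1 + (m - 1) * ICC (standard approximation)."""
          z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
          n_individual = 2 * (z * sd / delta) ** 2        # per arm, simple RCT
          de = 1 + (cluster_size - 1) * icc
          n_total = ceil(n_individual * de)
          return n_total, ceil(n_total / cluster_size)     # subjects, clusters per arm

      # Illustrative planning values: effect 0.25 SD, ICC 0.02, 25 pupils per class
      print(cluster_trial_n(delta=0.25, sd=1.0, icc=0.02, cluster_size=25))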

  8. Does size matter? An investigation into the Rey Complex Figure in a pediatric clinical sample.

    PubMed

    Loughan, Ashlee R; Perna, Robert B; Galbreath, Jennifer D

    2014-01-01

    The Rey Complex Figure Test (RCF) copy requires visuoconstructional skills and significant attentional, organizational, and problem-solving skills. Most scoring schemes codify a subset of the details involved in figure construction. Research is unclear regarding the meaning of figure size. The research hypothesis of our inquiry is that size of the RCF copy will have neuropsychological significance. Data from 95 children (43 girls, 52 boys; ages 6-18 years) with behavioral and academic issues revealed that larger figure drawings were associated with higher RCF total scores and significantly higher scores across many neuropsychological tests including the Wechsler Individual Achievement Test-Second Edition (WIAT-II) Word Reading (F = 5.448, p = .022), WIAT-II Math Reasoning (F = 6.365, p = .013), Children's Memory Scale Visual Delay (F = 4.015, p = .048), Trail-Making Test-Part A (F = 5.448, p = .022), and RCF Recognition (F = 4.862, p = .030). Results indicated that wider figures were associated with higher cognitive functioning, which may be part of an adaptive strategy in helping facilitate accurate and relative proportions of the complex details presented in the RCF. Overall, this study initiates the investigation of the RCF size and the relationship between size and a child's neuropsychological profile. PMID:24236943

  9. COMPARISON OF BIOLOGICAL COMMUNITIES: THE PROBLEM OF SAMPLE REPRESENTATIVENESS

    EPA Science Inventory

    Obtaining an adequate, representative sample of biological communities or assemblages to make richness or compositional comparisons among sites is a continuing challenge. Traditionally, sample size is based on numbers of replicates or area collected or numbers of individuals enum...

  10. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    USGS Publications Warehouse

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required and the difficulty of attaining reliable estimates, we advise caution before initiating such a mark-recapture effort. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
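
    A greatly simplified, hedged sketch of the simulation logic described above, not the Burnham joint model: simulate annual survival estimates with binomial noise and count how often the 95% confidence intervals for the first and last year fail to overlap. The marking totals, survival values, and CI rule are illustrative assumptions only.

      import numpy as np

      rng = np.random.default_rng(1)

      def detect_decline(n_marked, s_start, s_end, years=4, reps=1000):
          """Fraction of simulations in which the normal-approximation 95% CIs of
          estimated survival in year 1 and the final year do not overlap."""
          s_true = np.linspace(s_start, s_end, years)
          hits = 0
          for _ in range(reps):
              est = rng.binomial(n_marked, s_true) / n_marked
              se = np.sqrt(est * (1 - est) / n_marked)
              lo, hi = est - 1.96 * se, est + 1.96 * se
              if hi[-1] < lo[0]:          # last-year CI entirely below first-year CI
                  hits += 1
          return hits / reps

      # Illustrative: 10,000 marked per year, 50% decline in survival over 4 years
      print(detect_decline(10_000, 0.80, 0.40))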

  11. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data

    PubMed Central

    Bhaskar, Anand; Wang, Y.X. Rachel; Song, Yun S.

    2015-01-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. PMID:25564017

  12. The Effect of Grain Size on Radon Exhalation Rate in Natural-dust and Stone-dust Samples

    NASA Astrophysics Data System (ADS)

    Kumari, Raj; Kant, Krishan; Garg, Maneesha

    Radiation dose to the human population due to inhalation of radon and its progeny contributes more than 50% of the total dose from natural sources; radon is the second leading cause of lung cancer after smoking. In the present work, the dependence of the radon exhalation rate on the physical sample parameters of stone dust and natural dust was studied. The samples under study were first crushed, ground, dried and then passed through sieves with different pore sizes to obtain samples of various grain sizes (μm). The average radon mass exhalation rate is 5.95±2.7 mBq kg⁻¹ hr⁻¹ and the average radon surface exhalation rate is 286±36 mBq m⁻² hr⁻¹ for stone dust; the corresponding values for natural dust are 9.02±5.37 mBq kg⁻¹ hr⁻¹ and 360±67 mBq m⁻² hr⁻¹. The exhalation rate was found to increase with increasing grain size of the sample. The obtained values of the radon exhalation rate for all the samples are below the radon exhalation rate limits reported worldwide.

  13. Organic Composition of Size-Segregated Aerosols Sampled During the 2002 Bay Regional Atmospheric Chemistry Experiment (BRACE), Florida, USA

    NASA Astrophysics Data System (ADS)

    Tremblay, R. T.; Zika, R. G.

    2003-04-01

    Aerosol samples were collected for the analysis of organic source markers using non-rotating Micro Orifice Uniform Deposit Impactors (MOUDI) as part of the Bay Regional Atmospheric Chemistry Experiment (BRACE) in Tampa, FL, USA. Daily samples were collected 12 m above ground at a flow rate of 30 lpm throughout the month of May 2002. Aluminum foil discs were used to sample aerosol size fractions with aerodynamic cut diameters of 18, 10, 5.6, 3.2, 1.8, 1.0, 0.56, 0.32, 0.17 and 0.093 μm. Samples were solvent extracted using a mixture of dichloromethane/acetone/hexane, concentrated and then analyzed using gas chromatography-mass spectrometry (GC/MS). Low detection limits were achieved using an HP programmable temperature vaporizing (PTV) inlet and large volume injections (80 μl). Excellent chromatographic resolution was obtained using a 60 m long RTX-5MS, 0.25 mm I.D. column. A quantification method was built for over 90 organic compounds chosen as source markers including straight/iso/anteiso alkanes and polycyclic aromatic hydrocarbons (PAH). The investigation of potential aerosol sources for different particle sizes using known organic markers and source profiles will be presented. Size distributions of carbon preference indices (CPI), percent wax n-alkanes (%WNA) and concentrations of selected compounds will be discussed. Also, results will be compared with samples acquired in different environments including the 1999 Atlanta SuperSite Experiment, GA, USA.

  14. A novel in situ method for sampling urban soil dust: particle size distribution, trace metal concentrations, and stable lead isotopes.

    PubMed

    Bi, Xiangyang; Liang, Siyuan; Li, Xiangdong

    2013-06-01

    In this study, a novel in situ sampling method was utilized to investigate the concentrations of trace metals and Pb isotope compositions among different particle size fractions in soil dust, bulk surface soil, and corresponding road dust samples collected within an urban environment. The aim of the current study was to evaluate the feasibility of using soil dust samples to determine trace metal contamination and potential risks in urban areas in comparison with related bulk surface soil and road dust. The results of total metal loadings and Pb isotope ratios revealed that soil dust is more sensitive than bulk surface soil to anthropogenic contamination in urban areas. The new in situ method is effective at collecting different particle size fractions of soil dust from the surface of urban soils, and soil dust proved to be a critical indicator of anthropogenic contamination and potential human exposure in urban settings.

  15. A Monte Carlo approach to estimate the uncertainty in soil CO2 emissions caused by spatial and sample size variability.

    PubMed

    Shi, Wei-Yu; Su, Li-Jun; Song, Yi; Ma, Ming-Guo; Du, Sheng

    2015-10-01

    The soil CO2 emission is recognized as one of the largest fluxes in the global carbon cycle. Small errors in its estimation can result in large uncertainties and have important consequences for climate model predictions. The Monte Carlo approach is efficient for estimating and reducing spatial-scale sampling errors; however, it has not been used in soil CO2 emission studies. Here, soil respiration data from 51 PVC collars were measured within farmland cultivated with maize covering 25 km² during the growing season. Based on the Monte Carlo approach, optimal sample sizes for soil temperature, soil moisture, and soil CO2 emission were determined, and models of soil respiration could be assessed effectively: the soil temperature model was the most effective of the three models for increasing accuracy. The study demonstrated that the Monte Carlo approach may improve the accuracy of soil respiration estimates with a limited sample size, which will be valuable for reducing uncertainties in the global carbon cycle.
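
    A minimal sketch of the kind of Monte Carlo subsampling the abstract describes, not the authors' implementation: repeatedly draw subsamples of size n from the full set of collar measurements, compare each subsample mean with the full-sample mean, and report the smallest n whose mean relative error falls below a chosen tolerance. The tolerance and the synthetic data are illustrative.

      import numpy as np

      rng = np.random.default_rng(42)

      def optimal_sample_size(values, tol=0.05, reps=2000):
          """Smallest subsample size whose mean absolute relative error against the
          full-sample mean stays below `tol` (Monte Carlo estimate)."""
          values = np.asarray(values, dtype=float)
          full_mean = values.mean()
          for n in range(2, len(values)):
              draws = np.array([rng.choice(values, n, replace=False).mean()
                                for _ in range(reps)])
              if np.mean(np.abs(draws - full_mean) / full_mean) < tol:
                  return n
          return len(values)

      # Synthetic soil-respiration rates standing in for the 51 collar measurements:
      flux = rng.lognormal(mean=1.0, sigma=0.4, size=51)
      print(optimal_sample_size(flux))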

  17. Use of High-Frequency In-Home Monitoring Data May Reduce Sample Sizes Needed in Clinical Trials

    PubMed Central

    Dodge, Hiroko H.; Zhu, Jian; Mattek, Nora C.; Austin, Daniel; Kornfeld, Judith; Kaye, Jeffrey A.

    2015-01-01

    Background Trials in Alzheimer’s disease are increasingly focusing on prevention in asymptomatic individuals. This poses a challenge in examining treatment effects since currently available approaches are often unable to detect cognitive and functional changes among asymptomatic individuals. Resultant small effect sizes require large sample sizes using biomarkers or secondary measures for randomized controlled trials (RCTs). Better assessment approaches and outcomes capable of capturing subtle changes during asymptomatic disease stages are needed. Objective We aimed to develop a new approach to track changes in functional outcomes by using individual-specific distributions (as opposed to group-norms) of unobtrusive continuously monitored in-home data. Our objective was to compare sample sizes required to achieve sufficient power to detect prevention trial effects in trajectories of outcomes in two scenarios: (1) annually assessed neuropsychological test scores (a conventional approach), and (2) the likelihood of having subject-specific low performance thresholds, both modeled as a function of time. Methods One hundred nineteen cognitively intact subjects were enrolled and followed over 3 years in the Intelligent Systems for Assessing Aging Change (ISAAC) study. Using the difference in empirically identified time slopes between those who remained cognitively intact during follow-up (normal control, NC) and those who transitioned to mild cognitive impairment (MCI), we estimated comparative sample sizes required to achieve up to 80% statistical power over a range of effect sizes for detecting reductions in the difference in time slopes between NC and MCI incidence before transition. Results Sample size estimates indicated approximately 2000 subjects with a follow-up duration of 4 years would be needed to achieve a 30% effect size when the outcome is an annually assessed memory test score. When the outcome is likelihood of low walking speed defined using the

  18. Size separation method for absorption characterization in brown carbon: Application to an aged biomass burning sample

    NASA Astrophysics Data System (ADS)

    Di Lorenzo, Robert A.; Young, Cora J.

    2016-01-01

    The majority of brown carbon (BrC) in atmospheric aerosols is derived from biomass burning (BB) and is primarily composed of extremely low volatility organic carbons. We use two chromatographic methods to compare the contribution of large and small light-absorbing BrC components in aged BB aerosols with UV-vis absorbance detection: (1) size exclusion chromatography (SEC) and (2) reverse phase high-performance liquid chromatography. We observe no evidence of small molecule absorbers. Most BrC absorption arises from large molecular weight components (>1000 amu). This suggests that although small molecules may contribute to BrC absorption near the BB source, analyses of aerosol extracts should use methods selective to large molecular weight compounds because these species may be responsible for long-term BrC absorption. Further characterization with electrospray ionization mass spectrometry (MS) coupled to SEC demonstrates an underestimation of the molecular size determined through MS as compared to SEC.

  19. Correlation Between The Size Of Nd60Fe30Al10 Sample, Cast By Various Techniques And Its Coercivity

    NASA Astrophysics Data System (ADS)

    Kaszuwara, W.; Michalski, B.; Pawlik, P.; Latuch, J.

    2011-06-01

    The present study is concerned with the correlation between the magnetic properties of the Nd60Fe30Al10 sample and its size, in particular its dimension measured in the direction of heat removal. We compared samples produced using three methods: melt-spinning, die casting under pressure, and suction casting. The samples were of various shapes such as ribbons, plates, rods and pipes. We found that despite the differences in the shape of the samples and the technique of their casting, their magnetic properties did not differ significantly. Hence we may conclude that the sample dimension measured perpendicular to the heat-removing surface is the parameter that chiefly determines the cooling rate.

  20. Strategies for minimizing sample size for use in airborne LiDAR-based forest inventory

    USGS Publications Warehouse

    Junttila, Virpi; Finley, Andrew O.; Bradford, John B.; Kauranne, Tuomo

    2013-01-01

    Recently airborne Light Detection And Ranging (LiDAR) has emerged as a highly accurate remote sensing modality to be used in operational scale forest inventories. Inventories conducted with the help of LiDAR are most often model-based, i.e. they use variables derived from LiDAR point clouds as the predictive variables that are to be calibrated using field plots. The measurement of the necessary field plots is a time-consuming and statistically sensitive process. Because of this, current practice often presumes hundreds of plots to be collected. But since these plots are only used to calibrate regression models, it should be possible to minimize the number of plots needed by carefully selecting the plots to be measured. In the current study, we compare several systematic and random methods for calibration plot selection, with the specific aim that they be used in LiDAR based regression models for forest parameters, especially above-ground biomass. The primary criteria compared are based on both spatial representativity as well as on their coverage of the variability of the forest features measured. In the former case, it is important also to take into account spatial auto-correlation between the plots. The results indicate that choosing the plots in a way that ensures ample coverage of both spatial and feature space variability improves the performance of the corresponding models, and that adequate coverage of the variability in the feature space is the most important condition that should be met by the set of plots collected.

  1. Sampling size in the verification of manufactured-supplied air kerma strengths

    SciTech Connect

    Ramos, Luis Isaac; Martinez Monge, Rafael

    2005-11-15

    Quality control mandates that the air kerma strengths (S_K) of permanent seeds be verified; this is usually done using statistics inferred from 10% of the seeds. The goal of this paper is to propose a new sampling method in which the number of seeds to be measured is set beforehand according to an a priori level of statistical uncertainty. The results are based on the assumption that the S_K has a normal distribution. To demonstrate this, the S_K of each of the seeds measured was corrected to ensure that the average S_K of its sample remained the same. In this process 2030 results were collected and analyzed using a normal plot. In our opinion, the number of seeds sampled should be determined beforehand according to an a priori level of statistical uncertainty.
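
    The abstract proposes fixing the number of seeds to measure from an a priori uncertainty level under a normality assumption. A hedged sketch of the standard normal-theory calculation that idea implies is shown below; the relative spread and tolerance are illustrative values, not figures from the paper.

      from math import ceil
      from statistics import NormalDist

      def seeds_to_sample(rel_sd, rel_margin, conf=0.95):
          """Number of seeds needed so that the mean air kerma strength S_K is
          estimated within +/- rel_margin (relative) at the given confidence,
          assuming S_K values are normally distributed with relative SD rel_sd."""
          z = NormalDist().inv_cdf(0.5 + conf / 2)
          return ceil((z * rel_sd / rel_margin) ** 2)

      # e.g. 3% spread among seeds, +/-1.5% target uncertainty on the batch mean:
      print(seeds_to_sample(rel_sd=0.03, rel_margin=0.015))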

  2. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    SciTech Connect

    Pan, Bo; Shibutani, Yoji; Zhang, Xu; Shang, Fulin

    2015-07-07

    Recent research has shown that the yield strength of metals increases steeply with decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.

  3. A multi-scale study of Orthoptera species richness and human population size controlling for sampling effort

    NASA Astrophysics Data System (ADS)

    Cantarello, Elena; Steck, Claude E.; Fontana, Paolo; Fontaneto, Diego; Marini, Lorenzo; Pautasso, Marco

    2010-03-01

    Recent large-scale studies have shown that biodiversity-rich regions also tend to be densely populated areas. The most obvious explanation is that biodiversity and human beings tend to match the distribution of energy availability, environmental stability and/or habitat heterogeneity. However, the species-people correlation can also be an artefact, as more populated regions could show more species because of a more thorough sampling. Few studies have tested this sampling bias hypothesis. Using a newly collated dataset, we studied whether Orthoptera species richness is related to human population size in Italy’s regions (average area 15,000 km2) and provinces (2,900 km2). As expected, the observed number of species increases significantly with increasing human population size for both grain sizes, although the proportion of variance explained is minimal at the provincial level. However, variations in observed Orthoptera species richness are primarily associated with the available number of records, which is in turn well correlated with human population size (at least at the regional level). Estimated Orthoptera species richness (Chao2 and Jackknife) also increases with human population size both for regions and provinces. Both for regions and provinces, this increase is not significant when controlling for variation in area and number of records. Our study confirms the hypothesis that broad-scale human population-biodiversity correlations can in some cases be artefactual. More systematic sampling of less studied taxa such as invertebrates is necessary to ascertain whether biogeographical patterns persist when sampling effort is kept constant or included in models.
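
    As a hedged aside on the richness estimators named above, the sketch below computes a bias-corrected Chao2 estimate from an incidence (presence/absence by sampling unit) matrix; the toy matrix is invented for illustration and is unrelated to the Orthoptera data.

      import numpy as np

      def chao2(incidence):
          """Bias-corrected Chao2 estimator. `incidence` is a 2-D array of 0/1 with
          shape (sampling_units, species)."""
          inc = np.asarray(incidence, dtype=int)
          m = inc.shape[0]                       # number of sampling units
          counts = inc.sum(axis=0)               # units in which each species occurs
          s_obs = int(np.sum(counts > 0))
          q1 = int(np.sum(counts == 1))          # uniques
          q2 = int(np.sum(counts == 2))          # duplicates
          return s_obs + (m - 1) / m * q1 * (q1 - 1) / (2 * (q2 + 1))

      # Toy incidence matrix: 4 sampling units x 6 species
      toy = [[1, 1, 0, 0, 1, 0],
             [1, 0, 1, 0, 0, 0],
             [1, 1, 0, 1, 0, 0],
             [1, 0, 0, 0, 0, 1]]
      print(chao2(toy))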

  4. Item Characteristic Curve Parameters: Effects of Sample Size on Linear Equating.

    ERIC Educational Resources Information Center

    Ree, Malcom James; Jensen, Harald E.

    By means of computer simulation of test responses, the reliability of item analysis data and the accuracy of equating were examined for hypothetical samples of 250, 500, 1000, and 2000 subjects for two tests with 20 equating items plus 60 additional items on the same scale. Birnbaum's three-parameter logistic model was used for the simulation. The…
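
    The record cites Birnbaum's three-parameter logistic (3PL) model; for reference, a minimal sketch of the item response function (the scaling constant D = 1.7 is a common convention and an assumption here, as are the example parameter values):

```python
import math

def p_correct_3pl(theta, a, b, c, D=1.7):
    """Birnbaum 3PL item response function: probability of a correct response
    at ability theta, given discrimination a, difficulty b and
    pseudo-guessing c. D = 1.7 approximates the normal-ogive metric."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# e.g. an average-ability examinee on a moderately discriminating item
print(round(p_correct_3pl(theta=0.0, a=1.2, b=0.5, c=0.2), 3))
```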

  5. Assessing the Dimensionality of Item Response Matrices with Small Sample Sizes and Short Test Lengths.

    ERIC Educational Resources Information Center

    De Champlain, Andre; Gessaroli, Marc E.

    1998-01-01

    Type I error rates and rejection rates for three-dimensionality assessment procedures were studied with data sets simulated to reflect short tests and small samples. Results show that the G-squared difference test (D. Bock, R. Gibbons, and E. Muraki, 1988) suffered from a severely inflated Type I error rate at all conditions simulated. (SLD)

  6. Asbestos/NESHAP adequately wet guidance

    SciTech Connect

    Shafer, R.; Throwe, S.; Salgado, O.; Garlow, C.; Hoerath, E.

    1990-12-01

    The Asbestos NESHAP requires facility owners and/or operators involved in demolition and renovation activities to control emissions of particulate asbestos to the outside air because no safe concentration of airborne asbestos has ever been established. The primary method used to control asbestos emissions is to adequately wet the Asbestos Containing Material (ACM) with a wetting agent prior to, during and after demolition/renovation activities. The purpose of the document is to provide guidance to asbestos inspectors and the regulated community on how to determine if friable ACM is adequately wet as required by the Asbestos NESHAP.

  7. In situ detection of small-size insect pests sampled on traps using multifractal analysis

    NASA Astrophysics Data System (ADS)

    Xia, Chunlei; Lee, Jang-Myung; Li, Yan; Chung, Bu-Keun; Chon, Tae-Soo

    2012-02-01

    We introduce a multifractal analysis for detecting small-size pests (e.g., whiteflies) on sticky-trap images in situ. An automatic attraction system is used to collect pests from greenhouse plants. Multifractal analysis is applied to the segmentation of whitefly images based on local singularity and global image characteristics. According to the theory of multifractal dimension, candidate whitefly blobs are initially defined from the sticky-trap image. Two schemes, fixed thresholding and regional-minima extraction, were used to extract features from candidate whitefly image areas. The experiment was conducted with field images from a greenhouse, and detection results were compared with other adaptive segmentation algorithms. The F-measure, combining precision and recall, was higher for the proposed multifractal analysis (96.5%) than for conventional methods such as Watershed (92.2%) and Otsu (73.1%). The true positive rate of the multifractal analysis was 94.3% and the false positive rate was as low as 1.3%. Detection performance was further tested against human observation: agreement between manual and automatic counts was markedly higher with multifractal analysis (R2 = 0.992) than with Watershed (R2 = 0.895) or Otsu (R2 = 0.353), indicating that detection of small-size pests under field conditions is most feasible with multifractal analysis.
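
    The comparison above rests on precision, recall and the F-measure; a minimal sketch of these metrics computed from raw detection counts (illustrative only, not the authors' evaluation code; the counts are hypothetical):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 score from true positives, false positives
    and false negatives of a detection run."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical counts for a single sticky-trap image
print(detection_metrics(tp=94, fp=2, fn=6))
```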

  8. A spectroscopic sample of massive, quiescent z ∼ 2 galaxies: implications for the evolution of the mass-size relation

    SciTech Connect

    Krogager, J.-K.; Zirm, A. W.; Toft, S.; Man, A.; Brammer, G.

    2014-12-10

    We present deep, near-infrared Hubble Space Telescope/Wide Field Camera 3 grism spectroscopy and imaging for a sample of 14 galaxies at z ≈ 2 selected from a mass-complete photometric catalog in the COSMOS field. By combining the grism observations with photometry in 30 bands, we derive accurate constraints on their redshifts, stellar masses, ages, dust extinction, and formation redshifts. We show that the slope and scatter of the z ∼ 2 mass-size relation of quiescent galaxies are consistent with the local relation, and confirm previous findings that the sizes for a given mass are smaller by a factor of two to three. Finally, we show that the observed evolution of the mass-size relation of quiescent galaxies between z = 2 and 0 can be explained by the quenching of increasingly larger star-forming galaxies at a rate dictated by the increase in the number density of quiescent galaxies with decreasing redshift. However, we find that the scatter in the mass-size relation should increase in the quenching-driven scenario, in contrast to what is seen in the data. This suggests that merging is not needed to explain the evolution of the median mass-size relation of massive galaxies, but may still be required to tighten its scatter and to explain the size growth of individual z = 2 quiescent galaxies.

  9. Second generation laser-heated microfurnace for the preparation of microgram-sized graphite samples

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Smith, A. M.; Long, S.

    2015-10-01

    We present construction details and test results for two second-generation laser-heated microfurnaces (LHF-II) used to prepare graphite samples for Accelerator Mass Spectrometry (AMS) at ANSTO. Based on systematic studies aimed at optimising the performance of our prototype laser-heated microfurnace (LHF-I) (Smith et al., 2007 [1]; Smith et al., 2010 [2,3]; Yang et al., 2014 [4]), we have designed the LHF-II to have the following features: (i) it has a small reactor volume of 0.25 mL, allowing us to completely graphitise carbon dioxide samples containing as little as 2 μg of C; (ii) it can operate over a large pressure range (0-3 bar) and so has the capacity to graphitise CO2 samples containing up to 100 μg of C; (iii) it is compact, with three valves integrated into the microfurnace body; and (iv) it is compatible with our new miniaturised conventional graphitisation furnaces (MCF), also designed for small samples, and shares a common vacuum system. Early tests have shown that the extraneous carbon added during graphitisation in each LHF-II is of the order of 0.05 μg, assuming 100 pMC activity, similar to that of the prototype unit. We use a 'budget' fibre-packaged array for the diode laser, with custom-built focusing optics. The use of a new infrared (IR) thermometer with a short focal length has allowed us to decrease the height of the light-proof safety enclosure. These innovations have produced a cheaper and more compact device. As with the LHF-I, feedback control of the catalyst temperature and logging of the reaction parameters are managed by a LabVIEW interface.

  10. Quantifying Density Fluctuations in Volumes of All Shapes and Sizes Using Indirect Umbrella Sampling

    NASA Astrophysics Data System (ADS)

    Patel, Amish J.; Varilly, Patrick; Chandler, David; Garde, Shekhar

    2011-10-01

    Water density fluctuations are an important statistical mechanical observable and are related to many-body correlations, as well as hydrophobic hydration and interactions. Local water density fluctuations at a solid-water surface have also been proposed as a measure of its hydrophobicity. These fluctuations can be quantified by calculating the probability, P_v(N), of observing N waters in a probe volume of interest v. When v is large, calculating P_v(N) using molecular dynamics simulations is challenging, as the probability of observing very few waters is exponentially small, and the standard procedure for overcoming this problem (umbrella sampling in N) leads to undesirable impulsive forces. Patel et al. (J. Phys. Chem. B 114:1632, 2010) have recently developed an indirect umbrella sampling (INDUS) method that samples a coarse-grained particle number to obtain P_v(N) in cuboidal volumes. Here, we present and demonstrate an extension of that approach to volumes of other basic shapes, like spheres and cylinders, as well as to collections of such volumes. We further describe the implementation of INDUS in the NPT ensemble and calculate P_v(N) distributions over a broad range of pressures. Our method may be of particular interest in characterizing the hydrophobicity of interfaces of proteins, nanotubes and related systems.
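
    As a plain illustration of the quantity being sampled (direct counting only, not the INDUS method itself; the function name, array shapes and toy data are assumptions), P_v(N) can be estimated from simulation frames as follows. Direct counting resolves only typical fluctuations; the rare low-N tail is what biased sampling such as INDUS is needed for.

```python
import numpy as np

def pv_n(water_positions, lo, hi):
    """Estimate P_v(N) by counting waters inside a cuboidal probe volume v
    for each simulation frame.

    water_positions : array of shape (n_frames, n_waters, 3)
    lo, hi          : length-3 arrays, lower and upper corners of the cuboid
    """
    inside = np.all((water_positions >= lo) & (water_positions <= hi), axis=-1)
    counts = inside.sum(axis=-1)                      # N in v, frame by frame
    n_vals = np.arange(counts.max() + 2)
    edges = np.append(n_vals, n_vals[-1] + 1) - 0.5   # integer-centred bins
    hist, _ = np.histogram(counts, bins=edges)
    return n_vals, hist / hist.sum()

# toy data: 200 frames of 500 'waters' uniform in a 3 nm box, 1 nm^3 probe volume
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 3.0, size=(200, 500, 3))
N, P = pv_n(pos, lo=np.zeros(3), hi=np.ones(3))
```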

  11. Supervision of Student Teachers: How Adequate?

    ERIC Educational Resources Information Center

    Dean, Ken

    This study attempted to ascertain how adequately student teachers are supervised by college supervisors and supervising teachers. Questions to be answered were as follows: a) How do student teachers rate the adequacy of supervision given them by college supervisors and supervising teachers? and b) Are there significant differences between ratings…

  12. Small Rural Schools CAN Have Adequate Curriculums.

    ERIC Educational Resources Information Center

    Loustaunau, Martha

    The small rural school's foremost and largest problem is providing an adequate curriculum for students in a changing world. Often the small district cannot or is not willing to pay the per-pupil cost of curriculum specialists, specialized courses using expensive equipment no more than one period a day, and remodeled rooms to accommodate new…

  13. Toward More Adequate Quantitative Instructional Research.

    ERIC Educational Resources Information Center

    VanSickle, Ronald L.

    1986-01-01

    Sets an agenda for improving instructional research conducted with classical quantitative experimental or quasi-experimental methodology. Includes guidelines regarding the role of a social perspective, adequate conceptual and operational definition, quality instrumentation, control of threats to internal and external validity, and the use of…

  14. An Adequate Education Defined. Fastback 476.

    ERIC Educational Resources Information Center

    Thomas, M. Donald; Davis, E. E. (Gene)

    Court decisions historically have dealt with educational equity; now they are helping to establish "adequacy" as a standard in education. Legislatures, however, have been slow to enact remedies. One debate over education adequacy, though, is settled: Schools are not financed at an adequate level. This fastback is divided into three sections.…

  15. Funding the Formula Adequately in Oklahoma

    ERIC Educational Resources Information Center

    Hancock, Kenneth

    2015-01-01

    This report is a longitudinal, simulation-based study of how the ratio of state support to local support affects the number of school districts that break the common schools' funding formula, which in turn affects the equity of distribution to the common schools. After nearly two decades of adequately supporting the funding formula, Oklahoma…

  16. Experimental and theoretical investigation of the effects of sample size on copper plasma immersion ion implantation into polyethylene

    SciTech Connect

    Zhang Wei; Wu Zhengwei; Liu Chenglong; Pu Shihao; Zhang Wenjun; Chu, Paul K.

    2007-06-01

    Polymers are frequently surface modified to achieve special surface characteristics such as antibacterial properties, wear resistance, antioxidation, and good appearance. The application of metal plasma immersion ion implantation (PIII) to polymers is of practical interest as PIII offers advantages such as low costs, small instrument footprint, large area, and conformal processing capability. However, the insulating nature of most polymers usually leads to nonuniform plasma implantation and the surface properties can be adversely impacted. Copper is an antibacterial element and our previous experiments have shown that proper introduction of Cu by plasma implantation can significantly enhance the long-term antibacterial properties of polymers. However, lateral variations in the implant fluence and implantation depth across the insulating substrate can lead to inconsistent and irreproducible antibacterial effects. In this work, the influence of the sample size on the chemical and physical properties of copper plasma-implanted polyethylene is studied experimentally and theoretically using Poisson's equation and plasma sheath theory. Our results indicate that the sample size affects the implant depth profiles. For a large sample, more deposition occurs in the center region, whereas the implantation to deposition ratio shows less variation across the smaller sample. However, the Cu elemental chemical state is not affected by this variation. Our theoretical study discloses that nonuniform metal implantation mainly results from the laterally different surface potential on the insulating materials due to surface charge buildup and more effective charge transfer near the edge of the sample.

  17. How Many Is Enough? Effect of Sample Size in Inter-Subject Correlation Analysis of fMRI

    PubMed Central

    Pajula, Juha; Tohka, Jussi

    2016-01-01

    Inter-subject correlation (ISC) is a widely used method for analyzing functional magnetic resonance imaging (fMRI) data acquired during naturalistic stimuli. A challenge in ISC analysis is to define the required sample size in the way that the results are reliable. We studied the effect of the sample size on the reliability of ISC analysis and additionally addressed the following question: How many subjects are needed for the ISC statistics to converge to the ISC statistics obtained using a large sample? The study was realized using a large block design data set of 130 subjects. We performed a split-half resampling based analysis repeatedly sampling two nonoverlapping subsets of 10–65 subjects and comparing the ISC maps between the independent subject sets. Our findings suggested that with 20 subjects, on average, the ISC statistics had converged close to a large sample ISC statistic with 130 subjects. However, the split-half reliability of unthresholded and thresholded ISC maps improved notably when the number of subjects was increased from 20 to 30 or more. PMID:26884746
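
    A minimal sketch of the split-half resampling logic described above, using a generic correlation of group-average maps (not the authors' pipeline; the array shapes and the toy data are assumptions):

```python
import numpy as np

def split_half_reliability(subject_maps, group_size, n_iter=100, seed=0):
    """Split-half reliability of group-average statistic maps.

    subject_maps : array (n_subjects, n_voxels) of per-subject maps
    group_size   : subjects per non-overlapping half (e.g. 10 to 65)

    Repeatedly draws two disjoint groups, averages their maps and correlates
    the averages; returns the mean Pearson correlation across iterations.
    """
    rng = np.random.default_rng(seed)
    n_subjects = subject_maps.shape[0]
    rs = []
    for _ in range(n_iter):
        idx = rng.permutation(n_subjects)
        a = subject_maps[idx[:group_size]].mean(axis=0)
        b = subject_maps[idx[group_size:2 * group_size]].mean(axis=0)
        rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs))

# toy usage: 130 synthetic 'subjects' whose 1000-voxel maps share a common signal
rng = np.random.default_rng(1)
signal = rng.standard_normal(1000)
maps = signal + rng.standard_normal((130, 1000))
print(split_half_reliability(maps, group_size=20))
```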

  18. Effects of Sample Size and Dimensionality on the Performance of Four Algorithms for Inference of Association Networks in Metabonomics.

    PubMed

    Suarez-Diez, Maria; Saccenti, Edoardo

    2015-12-01

    We investigated the effect of sample size and dimensionality on the performance of four algorithms (ARACNE, CLR, CORR, and PCLRC) when they are used for the inference of metabolite association networks. We report that as many as 100-400 samples may be necessary to obtain stable network estimations, depending on the algorithm and the number of measured metabolites. The CLR and PCLRC methods produce similar results, whereas network inference based on correlations provides sparse networks; we found ARACNE to be unsuitable for this application, being unable to recover the underlying metabolite association network. We recommend the PCLRC algorithm for the inference of metabolite association networks.
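
    As a bare-bones illustration of the simplest of the four strategies (a thresholded-correlation baseline in the spirit of CORR; the threshold, data shapes and toy data are assumptions, and this is not the PCLRC algorithm):

```python
import numpy as np

def correlation_network(data, threshold=0.6):
    """Infer a metabolite association network by thresholding absolute
    Pearson correlations.

    data : array (n_samples, n_metabolites)
    Returns a boolean adjacency matrix with no self-edges."""
    corr = np.corrcoef(data, rowvar=False)
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)
    return adj

# toy check: 50 samples of 10 uncorrelated metabolites -> a nearly empty network
rng = np.random.default_rng(1)
print(correlation_network(rng.normal(size=(50, 10))).sum())
```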

  19. Band-limited angular spectrum numerical propagation method with selective scaling of observation window size and sample number.

    PubMed

    Yu, Xiao; Xiahui, Tang; Yingxiong, Qin; Hao, Peng; Wei, Wang

    2012-11-01

    Band-limited angular spectrum (BLAS) methods can be used to simulate diffractive propagation in the near field, the far field, tilted systems, and nonparaxial systems. However, they do not allow a free choice of sample interval on the output calculation window. In this paper, an improved BLAS method is proposed. The new algorithm permits selective scaling of the observation window size and sample number on the observation plane. The method is based on linear convolution, which can be computed efficiently with the fast Fourier transform.
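
    For context, a minimal sketch of the standard band-limited angular spectrum propagator (same input and output windows, band limit in the style of Matsushima and Shimobaba; it does not implement the paper's selective scaling of the observation window, and the example parameters are assumptions):

```python
import numpy as np

def blas_propagate(u0, wavelength, dx, z):
    """Propagate a complex field u0 (N x N, sample pitch dx) over a distance z
    with the band-limited angular spectrum method."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    k = 2 * np.pi / wavelength

    # transfer function for propagating components (evanescent part set to zero)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0, np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0.0))), 0)

    # band limit to avoid aliasing of the transfer function at large distances
    df = 1.0 / (n * dx)
    f_limit = 1.0 / (wavelength * np.sqrt((2 * df * z) ** 2 + 1.0))
    H = H * ((np.abs(FX) <= f_limit) & (np.abs(FY) <= f_limit))

    return np.fft.ifft2(np.fft.fft2(u0) * H)

# toy usage: a 256 x 256 square aperture, 10 um pitch, 633 nm light, z = 5 cm
u0 = np.zeros((256, 256), dtype=complex)
u0[96:160, 96:160] = 1.0
u1 = blas_propagate(u0, wavelength=633e-9, dx=10e-6, z=0.05)
```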

  20. Sediment Grain-Size and Loss-on-Ignition Analyses from 2002 Englebright Lake Coring and Sampling Campaigns

    USGS Publications Warehouse

    Snyder, Noah P.; Allen, James R.; Dare, Carlin; Hampton, Margaret A.; Schneider, Gary; Wooley, Ryan J.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.

    2004-01-01

    This report presents sedimentologic data from three 2002 sampling campaigns conducted in Englebright Lake on the Yuba River in northern California. This work was done to assess the properties of the material deposited in the reservoir between completion of Englebright Dam in 1940 and 2002, as part of the Upper Yuba River Studies Program. Included are the results of grain-size-distribution and loss-on-ignition analyses for 561 samples, as well as an error analysis based on replicate pairs of subsamples.

  1. An oxygen flow calorimeter for determining the heating value of kilogram size samples of municipal solid waste

    NASA Astrophysics Data System (ADS)

    Domalski, E. S.; Churney, K. L.; Ledford, A. E.; Ryan, R. V.; Reilly, M. L.

    1982-02-01

    A calorimeter to determine the enthalpies of combustion of kilogram-size samples of minimally processed municipal solid waste (MSW) in flowing oxygen near atmospheric pressure is discussed. The organic fraction of 25 gram pellets of highly processed MSW was burned in pure oxygen to CO2 and H2O in a small prototype calorimeter. The carbon content of the ash and the uncertainty in the amount of CO in the combustion products contribute calorimetric errors of 0.1 percent or less to the enthalpy of combustion. Large pellets of relatively unprocessed MSW have been successfully burned in a prototype kilogram-size combustor at a rate of 15 minutes per kilogram with CO/CO2 ratios not greater than 0.1 percent. The design of the kilogram-size calorimeter was completed and construction was begun.

  2. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data. PMID:27410085

  4. Power and sample size calculations for the Wilcoxon-Mann-Whitney test in the presence of death-censored observations.

    PubMed

    Matsouaka, Roland A; Betensky, Rebecca A

    2015-02-10

    We consider a clinical trial of a potentially lethal disease in which patients are randomly assigned to two treatment groups and are followed for a fixed period of time; a continuous endpoint is measured at the end of follow-up. For some patients, however, death (or severe disease progression) may preclude measurement of the endpoint. A statistical analysis that includes only patients with endpoint measurements may be biased. An alternative analysis includes all randomized patients, with rank scores assigned to the patients who are available for the endpoint measurement on the basis of the magnitude of their responses and with 'worst-rank' scores assigned to those patients whose death precluded the measurement of the continuous endpoint. The worst-rank scores are worse than all observed rank scores. The treatment effect is then evaluated using the Wilcoxon-Mann-Whitney test. In this paper, we derive closed-form formulae for the power and sample size of the Wilcoxon-Mann-Whitney test when missing measurements of the continuous endpoints because of death are replaced by worst-rank scores. We distinguish two approaches for assigning the worst-rank scores. In the tied worst-rank approach, all deaths are weighted equally, and the worst-rank scores are set to a single value that is worse than all measured responses. In the untied worst-rank approach, the worst-rank scores further rank patients according to their time of death, so that an earlier death is considered worse than a later death, which in turn is worse than all measured responses. In addition, we propose four methods for the implementation of the sample size formulae for a trial with expected early death. We conduct Monte Carlo simulation studies to evaluate the accuracy of our power and sample size formulae and to compare the four sample size estimation methods.
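
    The paper's closed-form worst-rank formulae are not reproduced in the abstract. Purely as background, the standard Noether approximation for the sample size of an ordinary two-sided Wilcoxon-Mann-Whitney test (no death-censoring, so explicitly not the authors' formula) can be sketched as follows; the example effect size is hypothetical.

```python
from math import ceil
from scipy.stats import norm

def wmw_total_sample_size(p, alpha=0.05, power=0.80, allocation=0.5):
    """Noether's approximation for the total sample size of a two-sided
    Wilcoxon-Mann-Whitney test.

    p          : assumed P(X > Y), the probability that a random observation
                 from group 1 exceeds one from group 2 (0.5 = no effect)
    allocation : fraction of subjects assigned to group 1
    """
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = (z_a + z_b) ** 2 / (12 * allocation * (1 - allocation) * (p - 0.5) ** 2)
    return ceil(n)

# e.g. p = 0.65 with 1:1 allocation -> about 117 subjects in total
print(wmw_total_sample_size(0.65))
```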

  5. Sample size requirements for in situ vegetation and substrate classifications in shallow, natural Nebraska Lakes

    USGS Publications Warehouse

    Paukert, C.P.; Willis, D.W.; Holland, R.S.

    2002-01-01

    We assessed the precision of visual estimates of vegetation and substrate along transects in 15 shallow, natural Nebraska lakes. Vegetation type (submergent or emergent), vegetation density (sparse, moderate, or dense), and substrate composition (percentage sand, muck, and clay; to the nearest 10%) were estimated at 25-70 sampling sites per lake by two independent observers. Observer agreement for vegetation type was 92%. Agreement ranged from 62.5% to 90.1% for substrate composition. Agreement was also high (72%) for vegetation density estimates. The relatively high agreement between estimates was likely attributable to the homogeneity of the lake habitats. Nearly 90% of the substrate sites were classified as 0% clay, and over 68% as either 0% or 100% sand. When habitats were homogeneous, less than 40 sampling sites per lake were required for 95% confidence that habitat composition was within 10% of the true mean, and over 100 sites were required when habitats were heterogeneous. Our results suggest that relatively high precision is attainable for vegetation and substrate mapping in shallow, natural lakes.

  6. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety.

    PubMed

    Kikuchi, Takashi; Gittins, John

    2009-08-15

    Sample size calculation must strike the best balance between the cost of a clinical trial and the possible benefits from a new treatment. Gittins and Pezeshk developed an innovative (behavioral Bayes) approach, which assumes that the number of users is an increasing function of the difference in performance between the new treatment and the standard treatment: the better a new treatment, the greater the number of patients who want to switch to it. The optimal sample size is calculated in this framework. This BeBay approach takes account of three decision-makers: a pharmaceutical company, the health authority and medical advisers. Kikuchi, Pezeshk and Gittins generalized this approach by introducing a logistic benefit function and by extending it to the more usual unpaired case with unknown variance. The expected net benefit in this model is based on the efficacy of the new drug but does not take account of the incidence of adverse reactions. The present paper extends the model to include the costs of treating adverse reactions and focuses on societal cost-effectiveness as the criterion for determining sample size. The main application is likely to be to phase III clinical trials, for which the primary aim is to compare the costs and benefits of a new drug with those of a standard drug in relation to national health care.

  7. Sampling, testing and modeling particle size distribution in urban catch basins.

    PubMed

    Garofalo, G; Carbone, M; Piro, P

    2014-01-01

    The study analyzed the particle size distribution of particulate matter (PM) retained in two catch basins, located near a parking lot and a traffic intersection respectively, both subject to high levels of traffic activity. The treatment performance of a filter medium was also evaluated by laboratory testing. The experimental treatment results and the field data were then used as inputs to a numerical model that described, on a qualitative basis, the hydrological response of the two catchments draining into each catch basin and the quality of treatment provided by the filter during the measured rainfall events. The results show that PM concentrations were on average around 300 mg/L (parking lot site) and 400 mg/L (road site) for the 10 rainfall-runoff events observed. PM with a particle diameter of <45 μm represented 40-50% of the total PM mass. The numerical model showed that a catch basin with a filter unit can remove 30 to 40% of the PM load, depending on the storm characteristics. PMID:25500476

  8. Distribution of human waste samples in relation to sizing waste processing in space

    NASA Technical Reports Server (NTRS)

    Parker, Dick; Gallagher, S. K.

    1992-01-01

    Human waste processing for closed ecological life support systems (CELSS) in space requires that there be an accurate knowledge of the quantity of wastes produced. Because initial CELSS will be handling relatively few individuals, it is important to know the variation that exists in the production of wastes rather than relying upon mean values that could result in undersizing equipment for a specific crew. On the other hand, because of the costs of orbiting equipment, it is important to design the equipment with a minimum of excess capacity because of the weight that extra capacity represents. A considerable quantity of information that had been independently gathered on waste production was examined in order to obtain estimates of equipment sizing requirements for handling waste loads from crews of 2 to 20 individuals. The recommended design for a crew of 8 should hold 34.5 liters per day (4315 ml/person/day) for urine and stool water and a little more than 1.25 kg per day (154 g/person/day) of human waste solids and sanitary supplies.

  9. Free and combined amino acids in size-segregated atmospheric aerosol samples

    NASA Astrophysics Data System (ADS)

    Di Filippo, Patrizia; Pomata, Donatella; Riccardi, Carmela; Buiarelli, Francesca; Gallo, Valentina; Quaranta, Alessandro

    2014-12-01

    Concentrations of free and combined amino acids in an urban atmosphere and their distributions in size-segregated particles were investigated in the cold and warm seasons. In particular, this article provides the first investigation of protein bioaerosol concentrations in the ultrafine fraction (PM0.1) of particulate matter. In addition, the present work provides amino acid and total proteinaceous material concentrations in NIST SRM 1649b, useful as reference values. The reference material was also used to build matrix-matched calibration curves. The total free amino acid content in winter and summer PM0.1 was 48.0 and 94.4 ng m-3 respectively, representing about 0.7 and 7.4% by weight of urban particulate matter in the two seasons. Total airborne protein and peptide concentrations in the same ultrafine fractions were 93.6 and 449.9 ng m-3 in winter and summer respectively, representing 7.5 and 35.4% w/w of PM0.1 and demonstrating an exceptionally high percentage in the summer ultrafine fraction. The significant potential adverse health effects of ultrafine particulate matter include allergies, mainly caused by protein particles; we assumed that in summer 162 ng h-1 of proteinaceous material, carried by ultrafine particles, can penetrate from the lungs into the bloodstream.

  10. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management. PMID:24465792
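
    The finite-population construction of the paper is not reproduced in the abstract. As a simple point of comparison only, the familiar large-population Wilson interval for a sample allele frequency (not the authors' method, which additionally handles small finite diploid populations) looks like this; the example counts are hypothetical.

```python
from math import sqrt
from scipy.stats import norm

def wilson_ci(allele_count, n_genes, confidence=0.95):
    """Wilson score confidence interval for an allele frequency, treating the
    sampled gene copies as independent draws from a very large population."""
    z = norm.ppf(0.5 + confidence / 2.0)
    p_hat = allele_count / n_genes
    centre = (p_hat + z * z / (2 * n_genes)) / (1 + z * z / n_genes)
    half = (z / (1 + z * z / n_genes)) * sqrt(
        p_hat * (1 - p_hat) / n_genes + z * z / (4 * n_genes * n_genes))
    return centre - half, centre + half

# e.g. 18 copies of an allele among 2N = 60 gene copies (30 diploid individuals)
print(wilson_ci(18, 60))
```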

  11. Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios

    NASA Technical Reports Server (NTRS)

    Juarez, Alfredo; Harper, Susana Tapia

    2016-01-01

    The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for the ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes for inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during test, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol as a result of this evaluation is also presented. Execution of the final proposed improved test protocol outlines an incremental step method of determining optimal conditions using increased sample sizes while considering test system safety limits. The proposed improved test method increases confidence in results obtained by utilizing the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.

  12. Wind tunnel study of twelve dust samples by large particle size

    NASA Astrophysics Data System (ADS)

    Shannak, B.; Corsmeier, U.; Kottmeier, Ch.; Al-azab, T.

    2014-12-01

    Owing to the lack of data for large dust and sand particles, the fluid dynamic characteristics, and hence the collection efficiencies, of twelve different dust samplers were experimentally investigated. Wind tunnel tests were carried out at wind velocities ranging from 1 to 5.5 m s-1. Polystyrene pellets (STYRO Beads) 0.5 and 1 mm in diameter were used as large solid particles in place of sand or dust. The results show that the collection efficiency is acceptable for only eight of the tested samplers, lying between 60 and 80% depending on the wind velocity and particle size. These samplers are: the Cox Sand Catcher (CSC), the British Standard Directional Dust Gauge (BSD), the Big Spring Number Eight (BSNE), the Suspended Sediment Trap (SUSTRA), the Modified Wilson and Cooke (MWAC), the Wedge Dust Flux Gauge (WDFG), the Model Series Number 680 (SIERRA) and the Pollet Catcher (POLCA). They can be cautiously recommended as suitable dust samplers, but with collection errors of 20 up to 40%. The BSNE showed the best performance, with a catching error of about 20%, and can be selected with caution as a suitable dust sampler. By contrast, the other four samplers tested, the Marble Dust Collector (MDCO), the United States Geological Survey (USGS) sampler, the Inverted Frisbee Sampler (IFS) and the Inverted Frisbee Shaped Collecting Bowl (IFSCB), cannot be recommended because of their very low collection efficiencies of 5 up to 40%. Overall, sampler efficiency may fall below 0.5, depending on the frictional losses (caused by the sampler geometry) in the fluid and in the particle's motion, and on the intensity of airflow acceleration near the sampler inlet. The dust data in the literature are therefore deficient and insufficient. To avoid erroneous collection data, and hence inaccurate mass flux modelling, the geometry of the dust sampler should be taken into account and further improved.

  13. Effects of dislocation density and sample-size on plastic yielding at the nanoscale: a Weibull-like framework.

    PubMed

    Rinaldi, Antonio

    2011-11-01

    Micro-compression tests have demonstrated that plastic yielding in nanoscale pillars is the result of the fine interplay between the sample size (chiefly the diameter D) and the density of bulk dislocations ρ. The power-law scaling typical of the nanoscale stems from a source-limited regime, which depends on both these sample parameters. Based on the experimental and theoretical results available in the literature, this paper offers a perspective on the joint effect of D and ρ on the yield stress in any plastic regime, together with a schematic graphical map of it. In the sample-size-dependent regime, this dependence is cast mathematically into a first-order Weibull-type theory, in which the power-law scaling exponent β and the modulus m of an approximate (unimodal) Weibull distribution of source strengths can be related by a simple inverse proportionality. As a corollary, the scaling exponent β may not be a universal number, as speculated in the literature. In this context, the discussion opens the alternative possibility of more general (multimodal) source-strength distributions, which could produce more complex and realistic strengthening patterns than the single power law usually assumed. The paper re-examines our own experimental data, as well as results of Bei et al. (2008) on Mo-alloy pillars, especially to emphasize the significance of a sudden increase in sample response scatter as a warning signal of an incipient source-limited regime.

  14. Effects of dislocation density and sample-size on plastic yielding at the nanoscale: a Weibull-like framework

    NASA Astrophysics Data System (ADS)

    Rinaldi, Antonio

    2011-11-01

    Micro-compression tests have demonstrated that plastic yielding in nanoscale pillars is the result of the fine interplay between the sample size (chiefly the diameter D) and the density of bulk dislocations ρ. The power-law scaling typical of the nanoscale stems from a source-limited regime, which depends on both these sample parameters. Based on the experimental and theoretical results available in the literature, this paper offers a perspective on the joint effect of D and ρ on the yield stress in any plastic regime, together with a schematic graphical map of it. In the sample-size-dependent regime, this dependence is cast mathematically into a first-order Weibull-type theory, in which the power-law scaling exponent β and the modulus m of an approximate (unimodal) Weibull distribution of source strengths can be related by a simple inverse proportionality. As a corollary, the scaling exponent β may not be a universal number, as speculated in the literature. In this context, the discussion opens the alternative possibility of more general (multimodal) source-strength distributions, which could produce more complex and realistic strengthening patterns than the single power law usually assumed. The paper re-examines our own experimental data, as well as results of Bei et al. (2008) on Mo-alloy pillars, especially to emphasize the significance of a sudden increase in sample response scatter as a warning signal of an incipient source-limited regime.

  15. Comparing the Pearson and Spearman correlation coefficients across distributions and sample sizes: A tutorial using simulations and empirical data.

    PubMed

    de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff

    2016-09-01

    The Pearson product-moment correlation coefficient (r) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r, and often corresponded more accurately to the population Pearson correlation coefficient (ρ) than r did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r and r_s. In conclusion, r is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research.
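
    A compact simulation in the spirit of the comparison above (a sketch with assumed sample size, correlation and degrees of freedom, not the authors' code):

```python
import numpy as np
from scipy import stats

def compare_r_rs(n=20, rho=0.6, n_sim=5000, df=3, seed=0):
    """Compare the sampling variability of Pearson's r and Spearman's r_s for
    correlated normal data versus heavy-tailed (t-distributed) data."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    out = {}
    for label, heavy in (("normal", False), ("heavy-tailed", True)):
        r_vals, rs_vals = [], []
        for _ in range(n_sim):
            z = rng.standard_t(df, size=(n, 2)) if heavy else rng.standard_normal((n, 2))
            x = z @ L.T                                   # impose correlation rho
            r_vals.append(stats.pearsonr(x[:, 0], x[:, 1])[0])
            rs_vals.append(stats.spearmanr(x[:, 0], x[:, 1])[0])
        out[label] = (np.std(r_vals), np.std(rs_vals))    # (SD of r, SD of r_s)
    return out

print(compare_r_rs())
```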

  16. The effects of different syringe volume, needle size and sample volume on blood gas analysis in syringes washed with heparin

    PubMed Central

    Küme, Tuncay; Şişman, Ali Rıza; Solak, Ahmet; Tuğlu, Birsen; Çinkooğlu, Burcu; Çoker, Canan

    2012-01-01

    Introduction: We evaluated the effect of syringe volume, needle size and sample volume on blood gas analysis in syringes washed with heparin. Materials and methods: In this multi-step experimental study, percent dilution ratios (PDRs) and final heparin concentrations (FHCs) were calculated by a gravimetric method to determine the effect of syringe volume (1, 2, 5 and 10 mL), needle size (20, 21, 22, 25 and 26 G) and sample volume (0.5, 1, 2, 5 and 10 mL). The effect of different PDRs and FHCs on blood gas and electrolyte parameters was determined, and the erroneous results from nonstandardized sampling were evaluated against RiliBAK’s TEa. Results: The PDRs and FHCs increased with decreasing syringe volume, increasing needle size and decreasing sample volume: from 2.0% and 100 IU/mL in the 10 mL syringe to 7.0% and 351 IU/mL in the 1 mL syringe; from 4.9% and 245 IU/mL with a 26 G needle to 7.6% and 380 IU/mL with a 20 G needle combined with the 1 mL syringe; and from 2.0% and 100 IU/mL for a completely filled sample to 34% and 1675 IU/mL for a 0.5 mL sample drawn into the 10 mL syringe. There was no statistically significant difference in pH, but the percentage decreases in pCO2, K+, iCa2+ and iMg2+ and the percentage increases in pO2 and Na+ were statistically significant compared with completely filled syringes. All changes in pH and pO2 were acceptable, but the changes in pCO2, Na+, K+ and iCa2+ were unacceptable according to the TEa limits, except for completely filled syringes. Conclusions: The changes in PDRs and FHCs due to nonstandardized sampling in syringes washed with liquid heparin give rise to erroneous test results for pCO2 and electrolytes. PMID:22838185

  17. Effect of temperature, sample size and gas flow rate on drying of Beulah-Zap lignite and Wyodak subbituminous coal

    SciTech Connect

    Vorres, K.S.

    1993-01-01

    Beulah-Zap lignite and Wyodak-Anderson coal (−100 and −20 mesh, from the Argonne Premium Coal Sample Program) were dried in nitrogen under various conditions of temperature (20-80 °C), gas flow rate (20-160 cc/min), and sample size (20-160 mg). An equation was developed relating the initial drying rate in the unimolecular mechanism to these three variables over the initial 80-85% of the moisture loss for the lignite. The behavior of the Wyodak-Anderson subbituminous coal is very similar to that of the lignite. The nitrogen BET surface area of the subbituminous sample is much larger than that of the lignite.

  18. Effect of temperature, sample size and gas flow rate on drying of Beulah-Zap lignite and Wyodak subbituminous coal

    SciTech Connect

    Vorres, K.S.

    1993-03-01

    Beulah-Zap lignite and Wyodak-Anderson coal (−100 and −20 mesh, from the Argonne Premium Coal Sample Program) were dried in nitrogen under various conditions of temperature (20-80 °C), gas flow rate (20-160 cc/min), and sample size (20-160 mg). An equation was developed relating the initial drying rate in the unimolecular mechanism to these three variables over the initial 80-85% of the moisture loss for the lignite. The behavior of the Wyodak-Anderson subbituminous coal is very similar to that of the lignite. The nitrogen BET surface area of the subbituminous sample is much larger than that of the lignite.

  19. Small population size of Pribilof Rock Sandpipers confirmed through distance-sampling surveys in Alaska

    USGS Publications Warehouse

    Ruthrauff, Daniel R.; Tibbitts, T. Lee; Gill, Robert E.; Dementyev, Maksim N.; Handel, Colleen M.

    2012-01-01

    The Rock Sandpiper (Calidris ptilocnemis) is endemic to the Bering Sea region and unique among shorebirds in the North Pacific for wintering at high latitudes. The nominate subspecies, the Pribilof Rock Sandpiper (C. p. ptilocnemis), breeds on four isolated islands in the Bering Sea and appears to spend the winter primarily in Cook Inlet, Alaska. We used a stratified systematic sampling design and line-transect method to survey the entire breeding range of this population during springs 2001-2003. Densities were up to four times higher on the uninhabited and more northerly St. Matthew and Hall islands than on St. Paul and St. George islands, which both have small human settlements and introduced reindeer herds. Differences in density, however, appeared to be more related to differences in vegetation than to anthropogenic factors, raising some concern for prospective effects of climate change. We estimated the total population at 19 832 birds (95% CI 17 853–21 930), ranking it among the smallest of North American shorebird populations. To determine the vulnerability of C. p. ptilocnemis to anthropogenic and stochastic environmental threats, future studies should focus on determining the amount of gene flow among island subpopulations, the full extent of the subspecies' winter range, and the current trajectory of this small population.

  20. [Preconcentration of Trace Cu(II) in Water Samples with Nano-Sized ZnO and Determination by GFAAS].

    PubMed

    Huang, Si-si; Zhang, Xu; Qian, Sha-hua

    2015-09-01

    The copper content of natural water is very low, making direct determination difficult; combining an efficient separation-enrichment technique with highly sensitive detection is therefore essential. Based on the high adsorption capacity of Cu(II) onto nano-sized ZnO, a novel method using nano-sized ZnO as the adsorbent and graphite furnace atomic absorption spectrometry for determination was developed in this work. The adsorption behavior of Cu(II) on nano-sized ZnO was studied, and the effects of acidity, adsorption equilibrium time, adsorbent dosage and coexisting ions on the adsorption rate were investigated. The results showed that the adsorption efficiency was above 95% over a pH range of 3.0 to 7.0. Compared with other adsorbents used for trace element enrichment, such as activated carbon and nano-sized TiO2 powder, the most prominent advantage is that the nano-sized ZnO precipitate carrying the concentrated element can be dissolved directly in HCl solution, without any filtration or desorption step, and analyzed directly by graphite furnace atomic absorption spectrometry or inductively coupled plasma atomic emission spectrometry. Compared with colloidal nanomaterials, dissolved nano-sized ZnO forms a true solution with a small matrix effect and low viscosity, making it more suitable for graphite furnace atomic absorption spectrometry or inductively coupled plasma atomic emission spectrometry detection. The proposed method has a low detection limit (0.13 μg · L(-1)) and good precision (RSD = 2.2%). Recoveries for environmental samples were in a range of 91.6%~92.6%, and the results obtained for certified materials using the proposed method were satisfactory.

  1. Organic composition of size segregated atmospheric particulate matter, during summer and winter sampling campaigns at representative sites in Madrid, Spain

    NASA Astrophysics Data System (ADS)

    Mirante, Fátima; Alves, Célia; Pio, Casimiro; Pindado, Oscar; Perez, Rosa; Revuelta, M.a. Aranzazu; Artiñano, Begoña

    2013-10-01

    Madrid, the largest city in Spain, has some unique air pollution problems, such as emissions from residential coal burning, a huge vehicle fleet and frequent African dust outbreaks, along with a lack of industrial emissions. The chemical composition of particulate matter (PM) was studied during summer and winter sampling campaigns conducted to obtain size-segregated information at two different urban sites (roadside and urban background). PM was sampled with high-volume cascade impactors with 4 stages: 10-2.5, 2.5-1, 1-0.5 and < 0.5 μm. Samples were solvent extracted, and organic compounds were identified and quantified by GC-MS. Alkanes, polycyclic aromatic hydrocarbons (PAHs), alcohols and fatty acids were chromatographically resolved. PM1-2.5 was the fraction with the highest mass percentage of organics, and acids were the organic compounds that dominated all particle size fractions. Different organic compounds presented apparently different seasonal characteristics, reflecting distinct emission sources such as vehicle exhaust and biogenic sources. The benzo[a]pyrene equivalent concentrations were lower than 1 ng m-3, and the estimated carcinogenic risk is low.

  2. Ewens' sampling formula and related formulae: combinatorial proofs, extensions to variable population size and applications to ages of alleles.

    PubMed

    Griffiths, Robert C; Lessard, Sabin

    2005-11-01

    Ewens' sampling formula, the probability distribution of a configuration of alleles in a sample of genes under the infinitely-many-alleles model of mutation, is proved by a direct combinatorial argument. The distribution is extended to a model where the population size may vary back in time. The distribution of age-ordered frequencies in the population is also derived in the model, extending the GEM distribution of age-ordered frequencies in a model with a constant-sized population. The genealogy of a rare allele is studied using a combinatorial approach. A connection is explored between the distribution of age-ordered frequencies and ladder indices and heights in a sequence of random variables. In a sample of n genes the connection is with ladder heights and indices in a sequence of draws from an urn containing balls labelled 1,2,...,n; and in the population the connection is with ladder heights and indices in a sequence of independent uniform random variables.
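
    For reference, the constant-population-size formula whose proof and extensions the paper addresses can be written down directly; a minimal sketch of the standard formula (not the paper's variable-population-size extension), with a hypothetical example configuration:

```python
from math import factorial, prod

def ewens_probability(a, theta):
    """Ewens' sampling formula under the infinitely-many-alleles model with a
    constant population size.

    a     : list where a[j-1] is the number of allele types represented by
            exactly j copies in the sample (so sum of j * a[j-1] equals n)
    theta : scaled mutation rate
    """
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = prod(theta + k for k in range(n))        # rising factorial theta^(n)
    config = prod(theta ** aj / (j ** aj * factorial(aj))
                  for j, aj in enumerate(a, start=1))
    return factorial(n) * config / rising

# sample of n = 4 genes: two singleton alleles and one doubleton -> probability 0.25
print(ewens_probability([2, 1, 0, 0], theta=1.0))
```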

  3. Ozonolysis pretreatment of maize stover: the interactive effect of sample particle size and moisture on ozonolysis process.

    PubMed

    Li, Cheng; Wang, Li; Chen, Zhengxing; Li, Yongfu; Wang, Ren; Luo, Xiaohu; Cai, Guolin; Li, Yanan; Yu, Qiusheng; Lu, Jian

    2015-05-01

    Maize stover was ozonolyzed to improve its enzymatic digestibility, and the interactive effect of sample particle size and moisture content on ozonolysis was studied. After ozonolysis, both lignin and xylan decreased while cellulose was only slightly affected in all experiments. It was also found that smaller particle sizes are better for ozonolysis. The similar water activity of the different optimum moisture contents for ozonolysis reveals that the ratio of free to bound water is a key factor in ozonolysis. The best ozonolysis result was obtained at a mesh size of -300 and a moisture content of 60%, where up to 75% of the lignin was removed. The glucose yield after enzymatic hydrolysis increased from 18.5% to 80%. Water washing had little impact on glucose yield (increases of less than 10%), but significantly reduced xylose yield (decreases of up to 42%), indicating that ozonolysis leads to xylan solubilization. PMID:25746300

  4. Size-separated sampling and analysis of isocyanates in workplace aerosols. Part I. Denuder--cascade impactor sampler.

    PubMed

    Dahlin, Jakob; Spanne, Mårten; Karlsson, Daniel; Dalene, Marianne; Skarping, Gunnar

    2008-07-01

    Isocyanates in the workplace atmosphere are typically present in both the gas and particle phases. The health effects of exposure to isocyanates in the gas phase and in different particle size fractions are likely to differ because of their ability to reach different parts of the respiratory system. To reveal more details regarding exposure to isocyanate aerosols, a denuder-impactor (DI) sampler for airborne isocyanates was designed. The sampler consists of a channel-plate denuder for collection of gaseous isocyanates, in series with three cascade impactor stages with cut-off diameters (d(50)) of 2.5, 1.0 and 0.5 μm. An end filter was connected in series after the impactor for collection of particles smaller than 0.5 μm. The denuder, impactor plates and end filter were impregnated with a mixture of di-n-butylamine (DBA) and acetic acid for derivatization of the isocyanates. During sampling, the reagent on the impactor plates and the end filter is continuously refreshed by DBA released from the impregnated denuder plates, which ensures efficient derivatization of all isocyanate particles. The airflow through the sampler was 5 l min(-1). After sampling, the samples containing the different size fractions were analyzed using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The DBA impregnation was stable in the sampler for at least 1 week, and after sampling the DBA derivatives were stable for at least 3 weeks. Air sampling was performed in a test chamber (300 l). The isocyanate aerosols studied were thermal degradation products of different polyurethane polymers, sprays of isocyanate coating compounds and pure gas-phase isocyanates. Sampling with impinger flasks containing DBA in toluene, with a glass fiber filter in series, was used as a reference method. The DI sampler showed good agreement with the reference method regarding total air levels. For the different aerosols studied, vast differences were revealed in the distribution of isocyanates between the gas and particle phases.

  5. Impact of cloud horizontal inhomogeneity and directional sampling on the retrieval of cloud droplet size by the POLDER instrument

    NASA Astrophysics Data System (ADS)

    Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.

    2015-11-01

    The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce valid retrievals and introduce small biases to the CDR (~ 1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and is largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method using the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.

  6. Does feature selection improve classification accuracy? Impact of sample size and feature selection on classification using anatomical magnetic resonance images.

    PubMed

    Chu, Carlton; Hsu, Ai-Ling; Chou, Kun-Hsien; Bandettini, Peter; Lin, Chingpo

    2012-03-01

    There are growing numbers of studies using machine learning approaches to characterize patterns of anatomical difference discernible from neuroimaging data. The high dimensionality of image data often raises a concern that feature selection is needed to obtain optimal accuracy. Among previous studies, mostly using fixed sample sizes, some show greater predictive accuracies with feature selection, whereas others do not. In this study, we compared four common feature selection methods: (1) pre-selected regions of interest (ROIs) based on prior knowledge; (2) univariate t-test filtering; (3) recursive feature elimination (RFE); and (4) t-test filtering constrained by ROIs. The predictive accuracies achieved from different sample sizes, with and without feature selection, were compared statistically. To demonstrate the effect, we used grey matter segmented from the T1-weighted anatomical scans collected by the Alzheimer's Disease Neuroimaging Initiative (ADNI) as the input features to a linear support vector machine classifier. The objective was to characterize the patterns of difference between Alzheimer's disease (AD) patients and cognitively normal subjects, and also to characterize the difference between mild cognitive impairment (MCI) patients and normal subjects. In addition, we also compared the classification accuracies between MCI patients who converted to AD and MCI patients who did not convert within the period of 12 months. Predictive accuracies from two data-driven feature selection methods (t-test filtering and RFE) were no better than those achieved using whole brain data. We showed that we could achieve the most accurate characterizations by using prior knowledge of where to expect neurodegeneration (hippocampus and parahippocampal gyrus). Therefore, feature selection does improve the classification accuracies, but it depends on the method adopted. In general, larger sample sizes yielded higher accuracies with less advantage obtained by using
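
    A minimal sketch of the kind of comparison described above, using scikit-learn on synthetic high-dimensional data (the ADNI grey-matter pipeline, ROI masks and exact classifier settings are not reproduced here); it contrasts a linear SVM trained on all features with t-test filtering and RFE, with selection nested inside cross-validation.

```python
# Illustrative comparison of "no selection" vs. t-test filtering vs. RFE
# with a linear SVM on synthetic data (not the ADNI grey-matter pipeline).
# Selection sits inside the pipeline so it is refit within every CV fold.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=120, n_features=2000,
                           n_informative=40, random_state=0)

pipelines = {
    "all features": make_pipeline(LinearSVC(max_iter=5000)),
    "t-test filter": make_pipeline(SelectKBest(f_classif, k=200),
                                   LinearSVC(max_iter=5000)),
    "RFE": make_pipeline(RFE(LinearSVC(max_iter=5000),
                             n_features_to_select=200, step=0.2),
                         LinearSVC(max_iter=5000)),
}

for name, pipe in pipelines.items():
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:>14s}: mean 5-fold CV accuracy = {acc:.3f}")
```

    The key design point in a sketch like this is that the feature selector is part of the pipeline, so it is refit inside each cross-validation fold rather than on the full dataset, which would otherwise bias the accuracy estimates.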

  7. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    PubMed

    Vasiliu, Daniel; Clamons, Samuel; McDonough, Molly; Rabe, Brian; Saha, Margaret

    2015-01-01

    Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.

  8. A log-linear model approach to estimation of population size using the line-transect sampling method

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1978-01-01

    The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
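
    For orientation, the conventional line-transect density estimator that log-linear approaches such as this one build on can be written as follows (standard background, not the specific estimator derived in the article):

```latex
% Conventional line-transect density estimator: n detections along total
% transect length L, with the detection-distance density f evaluated at
% zero perpendicular distance.
\[
  \hat{D} \;=\; \frac{n\,\hat{f}(0)}{2L},
\]
% equivalently \hat{D} = n / (2 L \hat{w}), where \hat{w} = 1/\hat{f}(0)
% is the effective strip half-width.
```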

  9. Hierarchical distance-sampling models to estimate population size and habitat-specific abundance of an island endemic

    USGS Publications Warehouse

    Sillett, Scott T.; Chandler, Richard B.; Royle, J. Andrew; Kéry, Marc; Morrison, Scott A.

    2012-01-01

    Population size and habitat-specific abundance estimates are essential for conservation management. A major impediment to obtaining such estimates is that few statistical models are able to simultaneously account for both spatial variation in abundance and heterogeneity in detection probability, and still be amenable to large-scale applications. The hierarchical distance-sampling model of J. A. Royle, D. K. Dawson, and S. Bates provides a practical solution. Here, we extend this model to estimate habitat-specific abundance and rangewide population size of a bird species of management concern, the Island Scrub-Jay (Aphelocoma insularis), which occurs solely on Santa Cruz Island, California, USA. We surveyed 307 randomly selected, 300 m diameter, point locations throughout the 250-km2 island during October 2008 and April 2009. Population size was estimated to be 2267 (95% CI 1613-3007) and 1705 (1212-2369) during the fall and spring respectively, considerably lower than a previously published but statistically problematic estimate of 12 500. This large discrepancy emphasizes the importance of proper survey design and analysis for obtaining reliable information for management decisions. Jays were most abundant in low-elevation chaparral habitat; the detection function depended primarily on the percent cover of chaparral and forest within count circles. Vegetation change on the island has been dramatic in recent decades, due to release from herbivory following the eradication of feral sheep (Ovis aries) from the majority of the island in the mid-1980s. We applied best-fit fall and spring models of habitat-specific jay abundance to a vegetation map from 1985, and estimated the population size of A. insularis was 1400-1500 at that time. The 20-30% increase in the jay population suggests that the species has benefited from the recovery of native vegetation since sheep removal. Nevertheless, this jay's tiny range and small population size make it vulnerable to natural

  10. Hierarchical distance-sampling models to estimate population size and habitat-specific abundance of an island endemic.

    PubMed

    Sillett, T Scott; Chandler, Richard B; Royle, J Andrew; Kery, Marc; Morrison, Scott A

    2012-10-01

    Population size and habitat-specific abundance estimates are essential for conservation management. A major impediment to obtaining such estimates is that few statistical models are able to simultaneously account for both spatial variation in abundance and heterogeneity in detection probability, and still be amenable to large-scale applications. The hierarchical distance-sampling model of J. A. Royle, D. K. Dawson, and S. Bates provides a practical solution. Here, we extend this model to estimate habitat-specific abundance and rangewide population size of a bird species of management concern, the Island Scrub-Jay (Aphelocoma insularis), which occurs solely on Santa Cruz Island, California, USA. We surveyed 307 randomly selected, 300 m diameter, point locations throughout the 250-km2 island during October 2008 and April 2009. Population size was estimated to be 2267 (95% CI 1613-3007) and 1705 (1212-2369) during the fall and spring respectively, considerably lower than a previously published but statistically problematic estimate of 12 500. This large discrepancy emphasizes the importance of proper survey design and analysis for obtaining reliable information for management decisions. Jays were most abundant in low-elevation chaparral habitat; the detection function depended primarily on the percent cover of chaparral and forest within count circles. Vegetation change on the island has been dramatic in recent decades, due to release from herbivory following the eradication of feral sheep (Ovis aries) from the majority of the island in the mid-1980s. We applied best-fit fall and spring models of habitat-specific jay abundance to a vegetation map from 1985, and estimated the population size of A. insularis was 1400-1500 at that time. The 20-30% increase in the jay population suggests that the species has benefited from the recovery of native vegetation since sheep removal. Nevertheless, this jay's tiny range and small population size make it vulnerable to natural

  11. Bioelement effects on thyroid gland in children living in iodine-adequate territory.

    PubMed

    Gorbachev, Anatoly L; Skalny, Anatoly V; Koubassov, Roman V

    2007-01-01

    Endemic goitre is a primary pathology of the thyroid gland and a critical medico-social problem in many countries. The dominant cause of endemic goitre is iodine deficiency. However, besides primary iodine deficiency, goitre may also develop due to the effects of other bioelement imbalances essential to the maintenance of thyroid function. Here we studied 44 cases of endemic goitre in prepubertal children (7-10 y.o.) living in an iodine-adequate territory. Thyroid volume was estimated by ultrasonometry. The main bioelements (Al, Ca, Cd, Co, Cr, Cu, Fe, Hg, I, Mg, Mn, Pb, Se, Si, Zn) were determined in hair samples by ICP-OES/ICP-MS. Relationships between the hair content of bioelements and thyroid gland size were estimated by multiple regression. The regression model revealed significant positive relations between thyroid volume and Cr, Si and Mn contents; however, the only actual factor in thyroid gland enlargement was an excess of Si in the organism. Significant negative relations of thyroid volume were revealed with I, Mg, Zn, Se, Co and Cd; of these, the actual factors in thyroid volume increase were I, Co, Mg and Se deficiency. The total bioelement contribution to thyroid impairment was estimated at 24%. Thus, it is suggested that endemic goitre in an iodine-adequate territory can arise from bioelement imbalances, namely Si excess and Co, Mg and Se shortage, as well as endogenous I deficiency, despite an iodine-adequate environment.

  12. Intercomparison of elemental concentrations in total and size-fractionated aerosol samples collected during the mace head experiment, April 1991

    NASA Astrophysics Data System (ADS)

    François, Filip; Maenhaut, Willy; Colin, Jean-Louis; Losno, Remi; Schulz, Michael; Stahlschmidt, Thomas; Spokes, Lucinda; Jickells, Timothy

    During an intercomparison field experiment, organized at the Atlantic coast station of Mace Head, Ireland, in April 1991, aerosol samples were collected by four research groups. A variety of samplers was used, combining both high- and low-volume devices, with different types of collection substrates: Hi-Vol Whatman 41 filter holders, single Nuclepore filters and stacked filter units, as well as PIXE cascade impactors. The samples were analyzed by each participating group, using in-house analytical techniques and procedures. The intercomparison of the daily concentrations for 15 elements, measured by two or more participants, revealed a good agreement for the low-volume samplers for the majority of the elements, but also indicated some specific analytical problems, owing to the very low concentrations of the non-sea-salt elements at the sampling site. With the Hi-Vol Whatman 41 filter sampler, on the other hand, much higher results were obtained in particular for the sea-salt and crustal elements. The discrepancy was dependent upon the wind speed and was attributed to a higher collection efficiency of the Hi-Vol sampler for the very coarse particles, as compared to the low-volume devices under high wind speed conditions. The elemental mass size distribution, as derived from parallel cascade impactor samplings by two groups, showed discrepancies in the submicrometer aerosol fraction, which were tentatively attributed to differences in stage cut-off diameters and/or to bounce-off or splintering effects on the quartz impactor slides used by one of the groups. However, the atmospheric concentrations (sums over all stages) were rather similar in the parallel impactor samples and were only slightly lower than those derived from stacked filter unit samples taken in parallel.

  13. Sample Size and Repeated Measures Required in Studies of Foods in the Homes of African-American Families123

    PubMed Central

    Stevens, June; Bryant, Maria; Wang, Chin-Hua; Cai, Jianwen; Bentley, Margaret E.

    2012-01-01

    Measurement of the home food environment is of interest to researchers because it affects food intake and is a feasible target for nutrition interventions. The objective of this study was to provide estimates to aid the calculation of sample size and number of repeated measures needed in studies of nutrients and foods in the home. We inventoried all foods in the homes of 80 African-American first-time mothers and determined 6 nutrient-related attributes. Sixty-three households were measured 3 times, 11 were measured twice, and 6 were measured once, producing 217 inventories collected at ~2-mo intervals. Following log transformations, number of foods, total energy, dietary fiber, and fat required only one measurement per household to achieve a correlation of 0.8 between the observed and true values. For percent energy from fat and energy density, 3 and 2 repeated measurements, respectively, were needed to achieve a correlation of 0.8. A sample size of 252 was needed to detect a difference of 25% of an SD in total energy with one measurement compared with 213 with 3 repeated measurements. Macronutrient characteristics of household foods appeared relatively stable over a 6-mo period and only 1 or 2 repeated measures of households may be sufficient for an efficient study design. PMID:22535753
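
    A Spearman-Brown-type reliability relation illustrates how statements of the form "k repeated measurements are needed to reach a correlation of 0.8 with the true value" arise; this is a standard result shown only for orientation, and the paper's own variance-component estimates are not reproduced here.

```latex
% With within- to between-household variance ratio r = sigma_w^2/sigma_b^2,
% the correlation between the mean of k repeated measurements and the true
% household value is
\[
  \rho_k \;=\; \sqrt{\frac{\sigma_b^{2}}{\sigma_b^{2} + \sigma_w^{2}/k}}
         \;=\; \sqrt{\frac{k}{k + r}},
\]
% so the smallest k achieving \rho_k \ge 0.8 satisfies
% k \ge r \cdot 0.8^{2} / (1 - 0.8^{2}).
```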

  14. Sample size and repeated measures required in studies of foods in the homes of African-American families.

    PubMed

    Stevens, June; Bryant, Maria; Wang, Chin-Hua; Cai, Jianwen; Bentley, Margaret E

    2012-06-01

    Measurement of the home food environment is of interest to researchers because it affects food intake and is a feasible target for nutrition interventions. The objective of this study was to provide estimates to aid the calculation of sample size and number of repeated measures needed in studies of nutrients and foods in the home. We inventoried all foods in the homes of 80 African-American first-time mothers and determined 6 nutrient-related attributes. Sixty-three households were measured 3 times, 11 were measured twice, and 6 were measured once, producing 217 inventories collected at ~2-mo intervals. Following log transformations, number of foods, total energy, dietary fiber, and fat required only one measurement per household to achieve a correlation of 0.8 between the observed and true values. For percent energy from fat and energy density, 3 and 2 repeated measurements, respectively, were needed to achieve a correlation of 0.8. A sample size of 252 was needed to detect a difference of 25% of an SD in total energy with one measurement compared with 213 with 3 repeated measurements. Macronutrient characteristics of household foods appeared relatively stable over a 6-mo period and only 1 or 2 repeated measures of households may be sufficient for an efficient study design.

  15. On the sample size requirement in genetic association tests when the proportion of false positives is controlled.

    PubMed

    Zou, Guohua; Zuo, Yijun

    2006-01-01

    With respect to the multiple-tests problem, recently an increasing amount of attention has been paid to control the false discovery rate (FDR), the positive false discovery rate (pFDR), and the proportion of false positives (PFP). The new approaches are generally believed to be more powerful than the classical Bonferroni one. This article focuses on the PFP approach. It demonstrates via examples in genetic association studies that the Bonferroni procedure can be more powerful than the PFP-control one and also shows the intrinsic connection between controlling the PFP and controlling the overall type I error rate. Since controlling the PFP does not necessarily lead to a desired power level, this article addresses the design issue and recommends the sample sizes that can attain the desired power levels when the PFP is controlled. The results in this article also provide rough guidance for the sample sizes to achieve the desired power levels when the FDR and especially the pFDR are controlled.
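
    A commonly used approximation links the PFP to the per-test type I error rate and power, which is how PFP control translates into a sample-size requirement; the expression below is shown for orientation under the usual independence assumptions and may differ in detail from the article's derivation.

```latex
% If m_0 of the m tests are true nulls and m_1 = m - m_0 are not, each
% test having type I error rate alpha and power 1 - beta, then approximately
\[
  \mathrm{PFP} \;\approx\; \frac{m_0\,\alpha}{m_0\,\alpha + m_1\,(1-\beta)},
\]
% so controlling PFP at level gamma requires per-test power
% 1 - \beta \ge m_0\,\alpha\,(1-\gamma) / (m_1\,\gamma),
% which in turn fixes the sample size through the usual power calculation.
```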

  16. Increasing sample size in prospective birth cohorts: back-extrapolating prenatal levels of persistent organic pollutants in newly enrolled children.

    PubMed

    Verner, Marc-André; Gaspar, Fraser W; Chevrier, Jonathan; Gunier, Robert B; Sjödin, Andreas; Bradman, Asa; Eskenazi, Brenda

    2015-03-17

    Study sample size in prospective birth cohorts of prenatal exposure to persistent organic pollutants (POPs) is limited by costs and logistics of follow-up. Increasing sample size at the time of health assessment would be beneficial if predictive tools could reliably back-extrapolate prenatal levels in newly enrolled children. We evaluated the performance of three approaches to back-extrapolate prenatal levels of p,p'-dichlorodiphenyltrichloroethane (DDT), p,p'-dichlorodiphenyldichloroethylene (DDE) and four polybrominated diphenyl ether (PBDE) congeners from maternal and/or child levels 9 years after delivery: a pharmacokinetic model and predictive models using deletion/substitution/addition or Super Learner algorithms. Model performance was assessed using the root mean squared error (RMSE), R2, and slope and intercept of the back-extrapolated versus measured levels. Super Learner outperformed the other approaches with RMSEs of 0.10 to 0.31, R2s of 0.58 to 0.97, slopes of 0.42 to 0.93 and intercepts of 0.08 to 0.60. Typically, models performed better for p,p'-DDT/E than PBDE congeners. The pharmacokinetic model performed well when back-extrapolating prenatal levels from maternal levels for compounds with longer half-lives like p,p'-DDE and BDE-153. Results demonstrate the ability to reliably back-extrapolate prenatal POP levels from levels 9 years after delivery, with Super Learner performing best based on our fit criteria. PMID:25698216
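
    The four fit criteria reported above (RMSE, R², slope and intercept of back-extrapolated versus measured levels) can be computed as in the short sketch below; the arrays are hypothetical log10 concentrations, and the back-extrapolation models themselves (pharmacokinetic, deletion/substitution/addition, Super Learner) are not reproduced.

```python
# Compute the four reported fit criteria on placeholder log10 levels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
measured = rng.normal(0.0, 0.5, size=100)                # hypothetical measured prenatal levels (log10)
predicted = 0.8 * measured + rng.normal(0, 0.15, 100)    # hypothetical back-extrapolated levels (log10)

rmse = np.sqrt(np.mean((predicted - measured) ** 2))
slope, intercept, r_value, _, _ = stats.linregress(measured, predicted)

print(f"RMSE      = {rmse:.2f}")
print(f"R^2       = {r_value**2:.2f}")
print(f"slope     = {slope:.2f}")
print(f"intercept = {intercept:.2f}")
```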

  17. Sample size and repeated measures required in studies of foods in the homes of African-American families.

    PubMed

    Stevens, June; Bryant, Maria; Wang, Chin-Hua; Cai, Jianwen; Bentley, Margaret E

    2012-06-01

    Measurement of the home food environment is of interest to researchers because it affects food intake and is a feasible target for nutrition interventions. The objective of this study was to provide estimates to aid the calculation of sample size and number of repeated measures needed in studies of nutrients and foods in the home. We inventoried all foods in the homes of 80 African-American first-time mothers and determined 6 nutrient-related attributes. Sixty-three households were measured 3 times, 11 were measured twice, and 6 were measured once, producing 217 inventories collected at ~2-mo intervals. Following log transformations, number of foods, total energy, dietary fiber, and fat required only one measurement per household to achieve a correlation of 0.8 between the observed and true values. For percent energy from fat and energy density, 3 and 2 repeated measurements, respectively, were needed to achieve a correlation of 0.8. A sample size of 252 was needed to detect a difference of 25% of an SD in total energy with one measurement compared with 213 with 3 repeated measurements. Macronutrient characteristics of household foods appeared relatively stable over a 6-mo period and only 1 or 2 repeated measures of households may be sufficient for an efficient study design. PMID:22535753

  18. MSurvPow: a FORTRAN program to calculate the sample size and power for cluster-randomized clinical trials with survival outcomes.

    PubMed

    Gao, Feng; Manatunga, Amita K; Chen, Shande

    2005-04-01

    Manatunga and Chen [A.K. Manatunga, S. Chen, Sample size estimation for survival outcomes in cluster-randomized studies with small cluster sizes, Biometrics 56 (2000) 616-621] proposed a method to estimate sample size and power for cluster-randomized studies where the primary outcome variable was survival time. The sample size formula was constructed by considering a bivariate marginal distribution (Clayton-Oakes model) with univariate exponential marginal distributions. In this paper, a user-friendly FORTRAN 90 program was provided to implement this method and a simple example was used to illustrate the features of the program.

  19. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave open two questions: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350

  20. The effects of composition, temperature and sample size on the sintering of chem-prep high field varistors.

    SciTech Connect

    Garino, Terry J.

    2007-09-01

    The sintering behavior of Sandia chem-prep high field varistor materials was studied using techniques including in situ shrinkage measurements, optical and scanning electron microscopy and x-ray diffraction. A thorough literature review of phase behavior, sintering and microstructure in Bi2O3-ZnO varistor systems is included. The effects of Bi2O3 content (from 0.25 to 0.56 mol%) and of sodium doping level (0 to 600 ppm) on the isothermal densification kinetics were determined between 650 and 825 °C. At ≥750 °C samples with ≥0.41 mol% Bi2O3 have very similar densification kinetics, whereas samples with ≤0.33 mol% begin to densify only after a period of hours at low temperatures. The effect of the sodium content was greatest at ~700 °C for standard 0.56 mol% Bi2O3 and was greater in samples with 0.30 mol% Bi2O3 than for those with 0.56 mol%. Sintering experiments on samples of differing size and shape found that densification decreases and mass loss increases with increasing surface area to volume ratio. However, these two effects have different causes: the enhancement in densification as samples increase in size appears to be caused by a low-oxygen internal atmosphere that develops, whereas the mass loss is due to the evaporation of bismuth oxide. In situ XRD experiments showed that the bismuth is initially present as an oxycarbonate that transforms to metastable β-Bi2O3 by 400 °C. At ~650 °C, coincident with the onset of densification, the cubic binary phase Bi38ZnO58 forms and remains stable to >800 °C, indicating that a eutectic liquid does not form during normal varistor sintering (~730 °C). Finally, the formation and morphology of bismuth oxide phase regions that form on the varistor surfaces during slow cooling were studied.

  1. Reversible phospholipid nanogels for deoxyribonucleic acid fragment size determinations up to 1500 base pairs and integrated sample stacking.

    PubMed

    Durney, Brandon C; Bachert, Beth A; Sloane, Hillary S; Lukomski, Slawomir; Landers, James P; Holland, Lisa A

    2015-06-23

    Phospholipid additives are a cost-effective medium to separate deoxyribonucleic acid (DNA) fragments and possess a thermally-responsive viscosity. This provides a mechanism to easily create and replace a highly viscous nanogel in a narrow bore capillary with only a 10°C change in temperature. Preparations composed of dimyristoyl-sn-glycero-3-phosphocholine (DMPC) and 1,2-dihexanoyl-sn-glycero-3-phosphocholine (DHPC) self-assemble, forming structures such as nanodisks and wormlike micelles. Factors that influence the morphology of a particular DMPC-DHPC preparation include the concentration of lipid in solution, the temperature, and the ratio of DMPC and DHPC. It has previously been established that an aqueous solution containing 10% phospholipid with a ratio of [DMPC]/[DHPC]=2.5 separates DNA fragments with nearly single base resolution for DNA fragments up to 500 base pairs in length, but beyond this size the resolution decreases dramatically. A new DMPC-DHPC medium is developed to effectively separate and size DNA fragments up to 1500 base pairs by decreasing the total lipid concentration to 2.5%. A 2.5% phospholipid nanogel generates a resolution of 1% of the DNA fragment size up to 1500 base pairs. This increase in the upper size limit is accomplished using commercially available phospholipids at an even lower material cost than is achieved with the 10% preparation. The separation additive is used to evaluate size markers ranging between 200 and 1500 base pairs in order to distinguish invasive strains of Streptococcus pyogenes and Aspergillus species by harnessing differences in gene sequences of collagen-like proteins in these organisms. For the first time, a reversible stacking gel is integrated in a capillary sieving separation by utilizing the thermally-responsive viscosity of these self-assembled phospholipid preparations. A discontinuous matrix is created that is composed of a cartridge of highly viscous phospholipid assimilated into a separation matrix

  2. Ion balances of size-resolved tropospheric aerosol samples: implications for the acidity and atmospheric processing of aerosols

    NASA Astrophysics Data System (ADS)

    Kerminen, Veli-Matti; Hillamo, Risto; Teinilä, Kimmo; Pakkanen, Tuomo; Allegrini, Ivo; Sparapani, Roberto

    A large set of size-resolved aerosol samples was inspected with regard to ion balance to shed light on how the aerosol acidity changes with particle size in the lower troposphere and what implications this might have for the atmospheric processing of aerosols. Quite different behaviour between the remote and more polluted environments could be observed. At the remote sites, practically the whole accumulation mode had cation-to-anion ratios clearly below unity, indicating that these particles were quite acidic. The supermicron size range was considerably less acidic and may in some cases have been close to neutral or even alkaline. An interesting feature common to the remote sites was a clear jump in the cation-to-anion ratio when going from the accumulation to the Aitken mode. The most likely reason for this was cloud processing which, via in-cloud sulphate production, makes the smallest accumulation-mode particles more acidic than the non-activated Aitken-mode particles. A direct consequence of the less acidic nature of the Aitken mode is that it can take up semi-volatile, water-soluble gases much more easily than the accumulation mode. This feature may have significant implications for atmospheric cloud condensation nuclei production in remote environments. In rural and urban locations, the cation-to-anion ratio was close to unity over most of the accumulation mode, but increased significantly when going to either larger or smaller particle sizes. The high cation-to-anion ratios in the supermicron size range were ascribed to carbonate associated with mineral dust. The ubiquitous presence of carbonate in these particles indicates that they were neutral or alkaline, making them good sites for heterogeneous reactions involving acidic trace gases. The high cation-to-anion ratios in the Aitken mode suggest that these particles contained some water-soluble anions not detected by our chemical analysis. This is worth keeping in mind when investigating the hygroscopic
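
    A minimal illustration of the ion-balance calculation underlying the cation-to-anion ratios discussed above; the ion list and concentrations are hypothetical, and real analyses work with size-resolved data for many more species.

```python
# Cation-to-anion equivalent ratio from (hypothetical) ion concentrations.
M = {"Na+": 22.99, "NH4+": 18.04, "K+": 39.10, "Ca2+": 40.08, "Mg2+": 24.31,
     "Cl-": 35.45, "NO3-": 62.00, "SO4^2-": 96.06}        # molar masses, g mol^-1
charge = {"Na+": 1, "NH4+": 1, "K+": 1, "Ca2+": 2, "Mg2+": 2,
          "Cl-": 1, "NO3-": 1, "SO4^2-": 2}

# hypothetical accumulation-mode concentrations in ng m^-3
conc = {"Na+": 120, "NH4+": 310, "K+": 25, "Ca2+": 15, "Mg2+": 14,
        "Cl-": 90, "NO3-": 260, "SO4^2-": 1200}

def neq(ion):
    """Concentration in nanoequivalents per m^3."""
    return conc[ion] / M[ion] * charge[ion]

cations = sum(neq(i) for i in ("Na+", "NH4+", "K+", "Ca2+", "Mg2+"))
anions = sum(neq(i) for i in ("Cl-", "NO3-", "SO4^2-"))
print(f"cation/anion equivalent ratio = {cations / anions:.2f}")
# a ratio well below 1 points to an acidic (H+-containing) particle mode
```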

  3. Statistical Analysis of a Large Sample Size Pyroshock Test Data Set Including Post Flight Data Assessment. Revision 1

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Anne M.

    2010-01-01

    The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.

  4. Pulse Stripping Analysis: A Technique for Determination of Some Metals in Aerosols and Other Limited Size Samples

    NASA Technical Reports Server (NTRS)

    Parry, Edward P.; Hern, Don H.

    1971-01-01

    A technique for determining lead with a detection limit down to a nanogram on limited size samples is described. The technique is an electrochemical one and involves pre-concentration of the metal species in a mercury drop. Although the emphasis in this paper is on the determination of lead, many metal ion species which are reducible to the metal at an electrode are equally determinable. A technique called pulse polarography is proposed to determine the metals in the drop and this technique is discussed and is compared with other techniques. Other approaches for determination of lead are also compared. Some data are also reported for the lead content of Ventura County particulates. The characterization of lead species by solubility parameters is discussed.

  5. The analysis of various size, visually selected and density and magnetically separated fractions of Luna 16 and 20 samples

    NASA Technical Reports Server (NTRS)

    Eglinton, G.; Gowar, A. P.; Jull, A. J. T.; Pillinger, C. T.; Agrell, S. O.; Agrell, J. E.; Long, J. V. P.; Bowie, S. H. U.; Simpson, P. R.; Beckinsale, R. D.

    1977-01-01

    Samples of Luna 16 and 20 have been separated according to size, visual appearance, density, and magnetic susceptibility. Selected aliquots were examined in eight British laboratories. The studies included mineralogy and petrology, selenochronology, magnetic characteristics, Mossbauer spectroscopy, oxygen isotope ratio determinations, cosmic ray track and thermoluminescence investigations, and carbon chemistry measurements. Luna 16 and 20 are typically mare and highland soils, comparing well with their Apollo counterparts, Apollo 11 and 16, respectively. Both soils are very mature (high free iron, carbide, and methane and cosmogenic Ar), while Luna 16, but not Luna 20, is characterized by a high content of glassy materials. An aliquot of anorthosite fragments, handpicked from Luna 20, had a gas retention age of about 4.3 plus or minus 0.1 Gy.

  6. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals.

    PubMed

    Kelley, Ken; Lai, Keke

    2011-02-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively small range at the specified level of confidence. The accuracy in parameter estimation approach to sample size planning is developed for the RMSEA so that the confidence interval for the population RMSEA will have a width whose expectation is sufficiently narrow. Analytic developments are shown to work well with a Monte Carlo simulation study. Freely available computer software is developed so that the methods discussed can be implemented. The methods are demonstrated for a repeated measures design where the way in which social relationships and initial depression influence coping strategies and later depression are examined.
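
    For orientation, the usual point estimate of the RMSEA is given below; the AIPE procedure described above targets the width of the confidence interval for the population RMSEA rather than this point estimate.

```latex
% Standard point estimate of the RMSEA, where chi^2 is the model test
% statistic, df its degrees of freedom, and N the sample size:
\[
  \widehat{\mathrm{RMSEA}}
  \;=\;
  \sqrt{\max\!\left(\frac{\chi^{2} - df}{df\,(N-1)},\, 0\right)}.
\]
```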

  7. Effect of sampling methods, effective population size and migration rate estimation in Glossina palpalis palpalis from Cameroon.

    PubMed

    Mélachio, Tanekou Tito Trésor; Njiokou, Flobert; Ravel, Sophie; Simo, Gustave; Solano, Philippe; De Meeûs, Thierry

    2015-07-01

    Human and animal trypanosomiases are two major constraints to development in Africa. These diseases are mainly transmitted by tsetse flies in particular by Glossina palpalis palpalis in Western and Central Africa. To set up an effective vector control campaign, prior population genetics studies have proved useful. Previous studies on population genetics of G. p. palpalis using microsatellite loci showed high heterozygote deficits, as compared to Hardy-Weinberg expectations, mainly explained by the presence of null alleles and/or the mixing of individuals belonging to several reproductive units (Wahlund effect). In this study we implemented a system of trapping, consisting of a central trap and two to four satellite traps around the central one to evaluate a possible role of the Wahlund effect in tsetse flies from three Cameroon human and animal African trypanosomiases foci (Campo, Bipindi and Fontem). We also estimated effective population sizes and dispersal. No difference was observed between the values of allelic richness, genetic diversity and Wright's FIS, in the samples from central and from satellite traps, suggesting an absence of Wahlund effect. Partitioning of the samples with Bayesian methods showed numerous clusters of 2-3 individuals as expected from a population at demographic equilibrium with two expected offspring per reproducing female. As previously shown, null alleles appeared as the most probable factor inducing these heterozygote deficits in these populations. Effective population sizes varied from 80 to 450 individuals while immigration rates were between 0.05 and 0.43, showing substantial genetic exchanges between different villages within a focus. These results suggest that the "suppression" with establishment of physical barriers may be the best strategy for a vector control campaign in this forest context.

  8. Endogenous testosterone concentration, mental rotation, and size of the corpus callosum in a sample of young Hungarian women.

    PubMed

    Karádi, Kázmér; Kállai, János; Kövér, Ferenc; Nemes, János; Makány, Tamás; Nagy, Ferenc

    2006-04-01

    In the present study, brain laterality, hemispheric communication, and mental rotation performance were examined. A sample of 33 women was tested for a possible linear relationship of testosterone level and mental rotation with the structural background of the brain. Subjects with a smaller splenial area of the corpus callosum tended to have lower levels of testosterone (r = .37, p < .05). However, there were no significant differences in mean scores of mental rotation of object and hand between groups with high and low levels of testosterone. There was a significant difference in relative size of the 6th area (slice) of the corpus callosum between groups with good and poor scores on mental rotation of an object, and also in relative size of the 4th and 5th slices of the corpus callosum between groups on mental rotation of the hand. The good and poor scorers show different relations with measures of the corpus callosum. Mental rotation of the hand was associated with the parietal areas of the corpus callosum, while mental rotation of the object was associated only with the occipital area. These observations suggest that higher testosterone levels may be associated with a larger splenial area, which represents an important connection between the parieto-occipital cortical areas involved in activation of mental images. Further study is encouraged.

  9. Is a vegetarian diet adequate for children.

    PubMed

    Hackett, A; Nathan, I; Burgess, L

    1998-01-01

    The number of people who avoid eating meat is growing, especially among young people. Benefits to health from a vegetarian diet have been reported in adults, but it is not clear to what extent these benefits are due to diet or to other aspects of lifestyle. In children, concern has been expressed about the adequacy of vegetarian diets, especially with regard to growth. The risks/benefits seem to be related to the degree of restriction of the diet; anaemia is probably both the main and the most serious risk, but this also applies to omnivores. Vegan diets are more likely to be associated with malnutrition, especially if the diets are the result of authoritarian dogma. Overall, lacto-ovo-vegetarian children consume diets closer to recommendations than omnivores and their pre-pubertal growth is at least as good. The simplest strategy when becoming vegetarian may involve reliance on vegetarian convenience foods, which are not necessarily superior in nutritional composition. The vegetarian sector of the food industry could do more to produce foods closer to recommendations. Vegetarian diets can be, but are not necessarily, adequate for children, providing vigilance is maintained, particularly to ensure variety. Identical comments apply to omnivorous diets. Three threats to the diet of children are too much reliance on convenience foods, lack of variety and lack of exercise.

  10. Non-exponential nature of calorimetric and other relaxations: effects of 2 nm-size solutes, loss of translational diffusion, isomer specificity, and sample size.

    PubMed

    Johari, G P; Khouri, J

    2013-03-28

    Certain distributions of relaxation times can be described in terms of a non-exponential response parameter, β, of value between 0 and 1. Both β and the relaxation time, τ0, of a material depend upon the probe used for studying its dynamics and the value of β is qualitatively related to the non-Arrhenius variation of viscosity and τ0. A solute adds to the diversity of an intermolecular environment and is therefore expected to reduce β, i.e., to increase the distribution and to change τ0. We argue that the calorimetric value β(cal) determined from the specific heat [Cp = T(dS∕dT)p] data is a more appropriate measure of the distribution of relaxation times arising from configurational fluctuations than β determined from other properties, and report a study of β(cal) of two sets of binary mixtures, each containing a different molecule of ∼2 nm size. We find that β(cal) changes monotonically with the composition, i.e., solute molecules modify the nano-scale composition and may increase or decrease τ0, but do not always decrease β(cal). (Plots of β(cal) against the composition do not show a minimum.) We also analyze the data from the literature, and find that (i) β(cal) of an orientationally disordered crystal is less than that of its liquid, (ii) β(cal) varies with the isomer's nature, and chiral centers in a molecule decrease β(cal), and (iii) β(cal) decreases when a sample's thickness is decreased to the nm-scale. After examining the difference between β(cal) and β determined from other properties we discuss the consequences of our findings for theories of non-exponential response, and suggest that studies of β(cal) may be more revealing of structure-freezing than studies of the non-Arrhenius behavior. On the basis of previous reports that β → 1 for dielectric relaxation of liquids of centiPoise viscosity observed at GHz frequencies, we argue that its molecular mechanism is the same as that of the Johari-Goldstein (JG) relaxation. Its

  11. Non-exponential nature of calorimetric and other relaxations: Effects of 2 nm-size solutes, loss of translational diffusion, isomer specificity, and sample size

    NASA Astrophysics Data System (ADS)

    Johari, G. P.; Khouri, J.

    2013-03-01

    Certain distributions of relaxation times can be described in terms of a non-exponential response parameter, β, of value between 0 and 1. Both β and the relaxation time, τ0, of a material depend upon the probe used for studying its dynamics and the value of β is qualitatively related to the non-Arrhenius variation of viscosity and τ0. A solute adds to the diversity of an intermolecular environment and is therefore expected to reduce β, i.e., to increase the distribution and to change τ0. We argue that the calorimetric value βcal determined from the specific heat [Cp = T(dS/dT)p] data is a more appropriate measure of the distribution of relaxation times arising from configurational fluctuations than β determined from other properties, and report a study of βcal of two sets of binary mixtures, each containing a different molecule of ˜2 nm size. We find that βcal changes monotonically with the composition, i.e., solute molecules modify the nano-scale composition and may increase or decrease τ0, but do not always decrease βcal. (Plots of βcal against the composition do not show a minimum.) We also analyze the data from the literature, and find that (i) βcal of an orientationally disordered crystal is less than that of its liquid, (ii) βcal varies with the isomer's nature, and chiral centers in a molecule decrease βcal, and (iii) βcal decreases when a sample's thickness is decreased to the nm-scale. After examining the difference between βcal and β determined from other properties we discuss the consequences of our findings for theories of non-exponential response, and suggest that studies of βcal may be more revealing of structure-freezing than studies of the non-Arrhenius behavior. On the basis of previous reports that β → 1 for dielectric relaxation of liquids of centiPoise viscosity observed at GHz frequencies, we argue that its molecular mechanism is the same as that of the Johari-Goldstein (JG) relaxation. Its spectrum becomes broader on

  12. Monte Carlo sampling can be used to determine the size and shape of the steady-state flux space.

    PubMed

    Wiback, Sharon J; Famili, Iman; Greenberg, Harvey J; Palsson, Bernhard Ø

    2004-06-21

    Constraint-based modeling results in a convex polytope that defines a solution space containing all possible steady-state flux distributions. The properties of this polytope have been studied extensively using linear programming to find the optimal flux distribution under various optimality conditions and convex analysis to define its extreme pathways (edges) and elementary modes. The work presented herein further studies the steady-state flux space by defining its hyper-volume. In low dimensions (i.e. for small sample networks), exact volume calculation algorithms were used. However, due to the #P-hard nature of the vertex enumeration and volume calculation problem in high dimensions, random Monte Carlo sampling was used to characterize the relative size of the solution space of the human red blood cell metabolic network. Distributions of the steady-state flux levels for each reaction in the metabolic network were generated to show the range of flux values for each reaction in the polytope. These results give insight into the shape of the high-dimensional solution space. The value of measuring uptake and secretion rates in shrinking the steady-state flux solution space is illustrated through singular value decomposition of the randomly sampled points. The V(max) of various reactions in the network are varied to determine the sensitivity of the solution space to the maximum capacity constraints. The methods developed in this study are suitable for testing the implication of additional constraints on a metabolic network system and can be used to explore the effects of single nucleotide polymorphisms (SNPs) on network capabilities. PMID:15178193
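
    A toy version of the sampling idea described above: rejection sampling in the null space of a small stoichiometric matrix characterizes the relative size of the steady-state flux polytope and the flux range of each reaction. The network, bounds and sample count are illustrative stand-ins, not the human red blood cell model or the exact algorithm of the paper.

```python
# Monte Carlo characterization of a toy steady-state flux space:
# sample in the null space of S, reject points violating 0 <= v <= v_max.
import numpy as np
from scipy.linalg import null_space

# Small network: A_ext -> A, A -> B, A -> C, B -> out, C -> out
S = np.array([[1, -1, -1,  0,  0],   # metabolite A
              [0,  1,  0, -1,  0],   # metabolite B
              [0,  0,  1,  0, -1]])  # metabolite C
v_max = np.array([10.0, 8.0, 6.0, 8.0, 6.0])   # capacity constraints

N = null_space(S)                    # steady-state fluxes are v = N @ alpha
rng = np.random.default_rng(1)

# bounding box for alpha; because N is orthonormal, |alpha_i| <= ||v|| <=
# sqrt(n_reactions) * max(v_max), so this box contains the whole polytope
alpha_bound = np.max(v_max) * np.sqrt(S.shape[1])
alphas = rng.uniform(-alpha_bound, alpha_bound, size=(200_000, N.shape[1]))
V = alphas @ N.T                     # candidate flux vectors
inside = np.all((V >= 0) & (V <= v_max), axis=1)

print(f"fraction of bounding box inside the flux polytope: {inside.mean():.4f}")
flux_samples = V[inside]
for i, name in enumerate(["v1", "v2", "v3", "v4", "v5"]):
    lo, hi = flux_samples[:, i].min(), flux_samples[:, i].max()
    print(f"{name}: sampled flux range [{lo:.2f}, {hi:.2f}]")
```

    The accepted points can then be used much as described in the abstract: their per-reaction histograms approximate the flux distributions, and tightening a bound (e.g. lowering one entry of v_max) shows how a measured uptake or secretion rate shrinks the solution space.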

  13. Effects of sample size, number of markers, and allelic richness on the detection of spatial genetic pattern

    USGS Publications Warehouse

    Landguth, Erin L.; Gedy, Bradley C.; Oyler-McCance, Sara J.; Garey, Andrew L.; Emel, Sarah L.; Mumma, Matthew; Wagner, Helene H.; Fortin, Marie-Josée; Cushman, Samuel A.

    2012-01-01

    The influence of study design on the ability to detect the effects of landscape pattern on gene flow is one of the most pressing methodological gaps in landscape genetic research. To investigate the effect of study design on landscape genetics inference, we used a spatially-explicit, individual-based program to simulate gene flow in a spatially continuous population inhabiting a landscape with gradual spatial changes in resistance to movement. We simulated a wide range of combinations of number of loci, number of alleles per locus and number of individuals sampled from the population. We assessed how these three aspects of study design influenced the statistical power to successfully identify the generating process among competing hypotheses of isolation-by-distance, isolation-by-barrier, and isolation-by-landscape resistance using a causal modelling approach with partial Mantel tests. We modelled the statistical power to identify the generating process as a response surface for equilibrium and non-equilibrium conditions after introduction of isolation-by-landscape resistance. All three variables (loci, alleles and sampled individuals) affect the power of causal modelling, but to different degrees. Stronger partial Mantel r correlations between landscape distances and genetic distances were found when more loci were used and when loci were more variable, which makes comparisons of effect size between studies difficult. Number of individuals did not affect the accuracy through mean equilibrium partial Mantel r, but larger samples decreased the uncertainty (increasing the precision) of equilibrium partial Mantel r estimates. We conclude that amplifying more (and more variable) loci is likely to increase the power of landscape genetic inferences more than increasing number of individuals.
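
    A compact, numpy-only permutation version of the partial Mantel test used in the causal-modelling comparisons described above; the matrices below are synthetic, and in practice dedicated packages (e.g. vegan or ecodist in R) would be used.

```python
# Simplified partial Mantel test: correlation of two distance matrices
# controlling for a third, with a one-sided permutation p-value.
import numpy as np

def _lower(mat):
    i, j = np.tril_indices_from(mat, k=-1)
    return mat[i, j]

def partial_mantel(A, B, C, n_perm=999, seed=0):
    a, b, c = _lower(A), _lower(B), _lower(C)
    ra = a - np.polyval(np.polyfit(c, a, 1), c)   # residualize A on C
    rb = b - np.polyval(np.polyfit(c, b, 1), c)   # residualize B on C
    r_obs = np.corrcoef(ra, rb)[0, 1]
    rng = np.random.default_rng(seed)
    n, count = A.shape[0], 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        ap = _lower(A[np.ix_(p, p)])
        ap = ap - np.polyval(np.polyfit(c, ap, 1), c)
        if np.corrcoef(ap, rb)[0, 1] >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# synthetic symmetric "distance" matrices for 30 sampled individuals
rng = np.random.default_rng(42)
pts = rng.uniform(size=(30, 2))
geo = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)          # geographic distance
gen = geo + rng.normal(0, 0.1, geo.shape); gen = (gen + gen.T) / 2  # "genetic" distance
np.fill_diagonal(gen, 0)
res = 0.5 * geo + rng.normal(0, 0.2, geo.shape); res = (res + res.T) / 2  # "resistance" distance
np.fill_diagonal(res, 0)

r, p = partial_mantel(gen, res, geo)
print(f"partial Mantel r = {r:.3f}, p = {p:.3f}")
```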

  14. Effects of sample size, number of markers, and allelic richness on the detection of spatial genetic pattern

    USGS Publications Warehouse

    Landguth, E.L.; Fedy, B.C.; Oyler-McCance, S.J.; Garey, A.L.; Emel, S.L.; Mumma, M.; Wagner, H.H.; Fortin, M.-J.; Cushman, S.A.

    2012-01-01

    The influence of study design on the ability to detect the effects of landscape pattern on gene flow is one of the most pressing methodological gaps in landscape genetic research. To investigate the effect of study design on landscape genetics inference, we used a spatially-explicit, individual-based program to simulate gene flow in a spatially continuous population inhabiting a landscape with gradual spatial changes in resistance to movement. We simulated a wide range of combinations of number of loci, number of alleles per locus and number of individuals sampled from the population. We assessed how these three aspects of study design influenced the statistical power to successfully identify the generating process among competing hypotheses of isolation-by-distance, isolation-by-barrier, and isolation-by-landscape resistance using a causal modelling approach with partial Mantel tests. We modelled the statistical power to identify the generating process as a response surface for equilibrium and non-equilibrium conditions after introduction of isolation-by-landscape resistance. All three variables (loci, alleles and sampled individuals) affect the power of causal modelling, but to different degrees. Stronger partial Mantel r correlations between landscape distances and genetic distances were found when more loci were used and when loci were more variable, which makes comparisons of effect size between studies difficult. Number of individuals did not affect the accuracy through mean equilibrium partial Mantel r, but larger samples decreased the uncertainty (increasing the precision) of equilibrium partial Mantel r estimates. We conclude that amplifying more (and more variable) loci is likely to increase the power of landscape genetic inferences more than increasing number of individuals. © 2011 Blackwell Publishing Ltd.

  15. Adequate mathematical modelling of environmental processes

    NASA Astrophysics Data System (ADS)

    Chashechkin, Yu. D.

    2012-04-01

    In environmental observations and laboratory visualization, both large-scale flow components such as currents, jets, vortices and waves and a fine structure are registered (different examples are given). Conventional mathematical modelling, both analytical and numerical, is directed mostly at the description of the energetically important flow components; the role of fine structures still remains obscure. A variety of existing models makes it difficult to choose the most adequate one and to assess their mutual degree of correspondence. The goal of the talk is to give a scrutiny analysis of the kinematics and dynamics of flows. A difference between the concept of "motion" as a transformation of a vector space into itself with distance conservation and the concept of "flow" as displacement and rotation of deformable "fluid particles" is underlined. The basic physical quantities of the flow, namely density, momentum, energy (entropy) and admixture concentration, are selected as physical parameters defined by the fundamental set, which includes the differential D'Alembert, Navier-Stokes, Fourier and/or Fick equations and the closing equation of state. All of them are observable and independent. Calculations of continuous Lie groups show that only the fundamental set is characterized by the ten-parameter Galilean group reflecting the basic principles of mechanics. The presented analysis demonstrates that conventionally used approximations dramatically change the symmetries of the governing equation sets, which leads to their incompatibility or even degeneration. The fundamental set is analyzed taking into account the condition of compatibility. The high order of the set indicates a complex structure of the complete solutions corresponding to the physical structure of real flows. Analytical solutions of a number of problems, including flows induced by diffusion on topography and generation of periodic internal waves by compact sources in weakly dissipative media, as well as numerical solutions of the same

  16. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  17. Sediment Grain Size Measurements: Is There a Difference Between Digested and Un-digested Samples? And Does the Organic Carbon of the Sample Play a Role?

    EPA Science Inventory

    Grain size is a physical measurement commonly made in the analysis of many benthic systems. Grain size influences benthic community composition, can influence contaminant loading and can indicate the energy regime of a system. We have recently investigated the relationship betw...

  18. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    PubMed

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σd'², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model.
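
    Written out, the invariance relation referred to above (Σd'²) and the attention-weighting idea are, schematically, as follows; the weighting scheme shown is a schematic rendering, not the authors' exact parameterization.

```latex
% Sample-size model: sensitivity d'_i grows with the square root of the
% samples allocated to item i, and the total sample pool is fixed, so for
% a display of m items
\[
  \sum_{i=1}^{m} (d'_i)^{2} \;=\; \text{constant across display sizes}.
\]
% Attention-weighted version (schematic): a captured item receives a share
% w > 1/m of the pool and the remaining m-1 items share 1-w, so its d' is
% proportional to sqrt(w) while the others scale with sqrt((1-w)/(m-1)).
```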

  20. The Clark Phase-able Sample Size Problem: Long-Range Phasing and Loss of Heterozygosity in GWAS

    NASA Astrophysics Data System (ADS)

    Halldórsson, Bjarni V.; Aguiar, Derek; Tarpine, Ryan; Istrail, Sorin

    A phase transition is taking place today. The amount of data generated by genome resequencing technologies is so large that in some cases it is now less expensive to repeat the experiment than to store the information generated by the experiment. In the next few years it is quite possible that millions of Americans will have been genotyped. The question then arises of how to make the best use of this information and jointly estimate the haplotypes of all these individuals. The premise of the paper is that long shared genomic regions (or tracts) are unlikely unless the haplotypes are identical by descent (IBD), in contrast to short shared tracts, which may be identical by state (IBS). Here we estimate for populations, using the US as a model, what sample size of genotyped individuals would be necessary to have sufficiently long shared haplotype regions (tracts) that are identical by descent (IBD), at a statistically significant level. These tracts can then be used as input for a Clark-like phasing method to obtain a complete phasing solution for the sample. We estimate in this paper that, for a population like the US with about 1% of the people genotyped (approximately 2 million), tracts about 200 SNPs long are shared IBD between pairs of individuals with high probability, which assures the success of the Clark phasing method. We show on simulated data that the algorithm obtains an almost perfect solution if the number of individuals being SNP-arrayed is large enough, and that its accuracy grows with the number of individuals genotyped.

  1. Conditional and Unconditional Tests (and Sample Size) Based on Multiple Comparisons for Stratified 2 × 2 Tables

    PubMed Central

    Martín Andrés, A.; Herranz Tejedor, I.; Álvarez Hernández, M.

    2015-01-01

    The Mantel-Haenszel test is the most frequently used asymptotic test for analyzing stratified 2 × 2 tables. Its exact alternative is the test of Birch, which has recently been reconsidered by Jung. Both tests have a conditional origin: Pearson's chi-squared test and Fisher's exact test, respectively. Both also share the drawback that the result of the global test (the stratified test) may not be compatible with the results of the individual tests (the test for each stratum). In this paper, we propose to carry out the global test using a multiple comparisons method (MC method) which does not have this disadvantage. By refining the method (the MCB method), an alternative to the Mantel-Haenszel and Birch tests may be obtained. The new MC and MCB methods have the advantage that they may be applied from an unconditional view, a methodology which until now has not been applied to this problem. We also propose some sample size calculation methods. PMID:26075012
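
    The MC and MCB procedures themselves are not reproduced here, but as a point of reference the sketch below computes the classical continuity-corrected Mantel-Haenszel statistic for K stratified 2 × 2 tables, the asymptotic comparator the abstract discusses. The stratum counts are hypothetical.

```python
# Minimal sketch of the asymptotic Mantel-Haenszel test for stratified 2x2 tables
# (the classical comparator discussed in the abstract; the MC/MCB procedures are
# not reproduced here). Input: one 2x2 table of counts per stratum.
import numpy as np
from scipy.stats import chi2

def mantel_haenszel(tables):
    """Continuity-corrected Mantel-Haenszel chi-square over K strata."""
    t = np.asarray(tables, dtype=float)           # shape (K, 2, 2)
    n = t.sum(axis=(1, 2))                        # stratum totals
    a = t[:, 0, 0]                                # observed count in cell (1,1)
    expected = t[:, 0, :].sum(axis=1) * t[:, :, 0].sum(axis=1) / n
    variance = (t[:, 0, :].sum(axis=1) * t[:, 1, :].sum(axis=1) *
                t[:, :, 0].sum(axis=1) * t[:, :, 1].sum(axis=1)) / (n**2 * (n - 1))
    stat = (abs(a.sum() - expected.sum()) - 0.5) ** 2 / variance.sum()
    return stat, chi2.sf(stat, df=1)

# Hypothetical example: three strata, rows = treatment/control, cols = event/no event.
strata = [[[12, 28], [8, 32]],
          [[20, 30], [14, 36]],
          [[7, 43], [4, 46]]]
stat, p = mantel_haenszel(strata)
print(f"MH chi-square = {stat:.2f}, p = {p:.3f}")
```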

  2. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    PubMed

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice. PMID:23365140
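
    The contrast the review describes can be sketched numerically: a conventional p-chart uses only within-subgroup (binomial) variation, so with very large denominators the limits collapse, whereas Laney's p' chart inflates them by a between-subgroup factor estimated from the moving range of the z-scores. The counts, denominators, and the 1.128 moving-range constant below follow the standard textbook construction and are illustrative only, not taken from the paper.

```python
# Sketch of the large-sample-size problem and Laney's adjustment: a conventional
# p-chart uses only within-subgroup (binomial) variation, so with huge n the limits
# collapse; the p' chart widens them by sigma_z, the between-subgroup variation
# estimated from the average moving range of the z-scores.
import numpy as np

counts = np.array([412, 398, 455, 430, 377, 490, 405, 441, 388, 467])  # e.g. admissions
n = np.full(counts.size, 20000)                                         # large denominators
p = counts / n

p_bar = counts.sum() / n.sum()
sigma_within = np.sqrt(p_bar * (1 - p_bar) / n)      # binomial sigma per subgroup

# Conventional p-chart limits.
p_ucl, p_lcl = p_bar + 3 * sigma_within, p_bar - 3 * sigma_within

# Laney p' chart: z-scores, then sigma_z from the average moving range (d2 = 1.128).
z = (p - p_bar) / sigma_within
sigma_z = np.mean(np.abs(np.diff(z))) / 1.128
pp_ucl, pp_lcl = p_bar + 3 * sigma_within * sigma_z, p_bar - 3 * sigma_within * sigma_z

outside_p = np.sum((p > p_ucl) | (p < p_lcl))
outside_pp = np.sum((p > pp_ucl) | (p < pp_lcl))
print(f"points outside limits: p-chart = {outside_p}, Laney p'-chart = {outside_pp}")
```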

  3. Are shear force methods adequately reported?

    PubMed

    Holman, Benjamin W B; Fowler, Stephanie M; Hopkins, David L

    2016-09-01

    This study aimed to determine the detail to which shear force (SF) protocols and methods have been reported in the scientific literature between 2009 and 2015. Articles (n=734) published in peer-reviewed animal and food science journals and limited to only those testing the SF of unprocessed and non-fabricated mammal meats were evaluated. It was found that most of these SF articles originated in Europe (35.3%), investigated bovine species (49.0%), measured m. longissimus samples (55.2%), used tenderometers manufactured by Instron (31.2%), and equipped with Warner-Bratzler blades (68.8%). SF samples were also predominantly thawed prior to cooking (37.1%) and cooked sous vide, using a water bath (50.5%). Information pertaining to blade crosshead speed (47.5%), recorded SF resistance (56.7%), muscle fibre orientation when tested (49.2%), sub-section or core dimension (21.8%), end-point temperature (29.3%), and other factors contributing to SF variation were often omitted. This base failure diminishes repeatability and accurate SF interpretation, and must therefore be rectified. PMID:27107727

  5. Visuospatial ability, accuracy of size estimation, and bulimic disturbance in a noneating-disordered college sample: a neuropsychological analysis.

    PubMed

    Thompson, J K; Spana, R E

    1991-08-01

    The relationship between visuospatial ability and size accuracy in perception was assessed in 69 normal college females. In general, correlations indicated small associations between visuospatial defects and size overestimation and little relationship between visuospatial ability and level of bulimic disturbance. Implications for research on the size overestimation of body image are addressed.

  6. Day and night variation in chemical composition and toxicological responses of size segregated urban air PM samples in a high air pollution situation

    NASA Astrophysics Data System (ADS)

    Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.

    2015-11-01

    Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges, and how this is subsequently reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples in four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to the particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to image the PM samples. The chemical composition and the induced toxicological responses of the size-segregated PM samples showed considerable size-dependent differences as well as day-to-night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic, and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than during the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in different toxicity in the PM

  7. Diet- and Body Size-Related Attitudes and Behaviors Associated with Vitamin Supplement Use in a Representative Sample of Fourth-Grade Students in Texas

    ERIC Educational Resources Information Center

    George, Goldy C.; Hoelscher, Deanna M.; Nicklas, Theresa A.; Kelder, Steven H.

    2009-01-01

    Objective: To examine diet- and body size-related attitudes and behaviors associated with supplement use in a representative sample of fourth-grade students in Texas. Design: Cross-sectional data from the School Physical Activity and Nutrition study, a probability-based sample of schoolchildren. Children completed a questionnaire that assessed…

  8. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    SciTech Connect

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same
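
    The paper's constrained Lagrange-multiplier scheme is not reproduced here; the sketch below only illustrates the underlying idea of sizing repeated phantom scans so that a confidence interval for the mean effective dose reaches a target relative precision, under a normal approximation with an assumed coefficient of variation. All numbers (CVs, precision, confidence) are hypothetical and are not claimed to reproduce the paper's sample sizes.

```python
# Simplified sketch (not the paper's Lagrange-multiplier scheme): choose the number
# of repeated phantom scans so that the half-width of the confidence interval for the
# mean effective dose is within a target relative precision, assuming approximately
# normal measurement error with an assumed coefficient of variation (CV).
import math
from scipy.stats import norm

def scans_needed(cv, rel_precision=0.05, confidence=0.95):
    """Smallest n with z * cv / sqrt(n) <= rel_precision (normal approximation)."""
    z = norm.ppf(0.5 + confidence / 2)
    return math.ceil((z * cv / rel_precision) ** 2)

# Hypothetical CVs: measurement variation is relatively larger when the anticipated
# ED is low than when it is high, so more scans are needed at low dose.
for label, cv in [("low-dose protocol (assumed CV = 0.14)", 0.14),
                  ("higher-dose protocol (assumed CV = 0.05)", 0.05)]:
    print(f"{label}: n = {scans_needed(cv)} scans for 95% confidence, 5% precision")
```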

  9. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same

  10. Proposed regression equations for prediction of the size of unerupted permanent canines and premolars in Yemeni sample

    PubMed Central

    Al-Kabab, FA; Ghoname, NA; Banabilh, SM

    2014-01-01

    Objective: The aim was to formulate a prediction regression equation for a Yemeni population and to compare it with Moyer's method for the prediction of the size of the unerupted permanent canines and premolars. Subjects and Methods: Measurements of the mesio-distal width of the four permanent mandibular incisors, as well as the canines and premolars in both arches, were obtained from a sample of 400 school children aged 12-14 years old (13.80 ± 0.42 standard deviation) using an electronic digital calliper. The data were subjected to statistical and linear regression analysis and then compared with Moyer's prediction tables. Results: The results showed that the mean mesio-distal tooth widths of the canines and premolars in the maxillary arch were significantly larger in boys than girls (P < 0.001), while, in the mandibular arch, only lateral incisors and canines were also significantly larger in boys than in girls (P < 0.001). Regression equations for the maxillary arch (boys, Y = 13.55 + 0.29X; girls, Y = 14.04 + 0.25X) and the mandibular arch (boys, Y = 9.97 + 0.40X; girls, Y = 9.56 + 0.41X) were formulated and used to develop new probability tables following Moyer's method. Significant differences (P < 0.05) were found between the widths predicted in the present study and Moyer's tables at almost all percentile levels, including the recommended 50% and 75% levels. Conclusions: Moyer's probability tables significantly overestimate the mesio-distal widths of the unerupted permanent canines and premolars of Yemenis at almost all percentile levels, including the commonly used 50% and 75% levels. Therefore, it is suggested with caution that the proposed prediction regression equations and tables developed in the present study could be considered an alternative and more precise method for mixed dentition space analysis in Yemenis. PMID:25143930
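
    Because the abstract reports the fitted equations directly, they can be evaluated with a few lines of code. Following the usual Moyers-style convention (an interpretive assumption here), X is taken as the summed mesio-distal width of the four mandibular incisors and Y as the predicted combined width of the canine and two premolars in one quadrant, both in millimetres; the example incisor sum is hypothetical.

```python
# Evaluating the regression equations reported in the abstract. Following the usual
# Moyers convention (an assumption here), X is the summed mesio-distal width of the
# four mandibular incisors and Y is the predicted combined width of the canine and
# two premolars in one quadrant, both in millimetres.
EQUATIONS = {
    ("maxillary", "boys"):   (13.55, 0.29),
    ("maxillary", "girls"):  (14.04, 0.25),
    ("mandibular", "boys"):  (9.97, 0.40),
    ("mandibular", "girls"): (9.56, 0.41),
}

def predicted_width(arch: str, sex: str, incisor_sum_mm: float) -> float:
    intercept, slope = EQUATIONS[(arch, sex)]
    return intercept + slope * incisor_sum_mm

# Example: a hypothetical incisor sum of 23.0 mm.
for arch, sex in EQUATIONS:
    print(f"{arch:10s} {sex:5s}: Y = {predicted_width(arch, sex, 23.0):.2f} mm")
```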

  11. Adequate histologic sectioning of prostate needle biopsies.

    PubMed

    Bostwick, David G; Kahane, Hillel

    2013-08-01

    No standard method exists for sampling prostate needle biopsies, although most reports claim to embed 3 cores per block and obtain 3 slices from each block. This study was undertaken to determine the extent of histologic sectioning necessary for optimal examination of prostate biopsies. We prospectively compared the impact on cancer yield of submitting 1 biopsy core per cassette (biopsies from January 2010) with 3 cores per cassette (biopsies from August 2010) from a large national reference laboratory. Between 6 and 12 slices were obtained with the former 1-core method, resulting in 3 to 6 slices being placed on each of 2 slides; for the latter 3-core method, a limit of 6 slices was obtained, resulting in 3 slices being placed on each of 2 slides. A total of 6708 sets of 12 to 18 core biopsies were studied, including 3509 biopsy sets from the 1-biopsy-core-per-cassette group (January 2010) and 3199 biopsy sets from the 3-biopsy-cores-per-cassette group (August 2010). The yield of diagnoses was classified as benign, atypical small acinar proliferation, high-grade prostatic intraepithelial neoplasia, and cancer and was similar with the 2 methods: 46.2%, 8.2%, 4.5%, and 41.1% and 46.7%, 6.3%, 4.4%, and 42.6%, respectively (P = .02). Submission of 1 core or 3 cores per cassette had no effect on the yield of atypical small acinar proliferation, prostatic intraepithelial neoplasia, or cancer in prostate needle biopsies. Consequently, we recommend submission of 3 cores per cassette to minimize the labor and cost of processing. PMID:23764163

  12. Sample size calculations for intervention trials in primary care randomizing by primary care group: an empirical illustration from one proposed intervention trial.

    PubMed

    Eldridge, S; Cryer, C; Feder, G; Underwood, M

    2001-02-15

    Because of the central role of the general practice in the delivery of British primary care, intervention trials in primary care often use the practice as the unit of randomization. The creation of primary care groups (PCGs) in April 1999 changed the organization of primary care and the commissioning of secondary care services. PCGs will directly affect the organization and delivery of primary, secondary and social care services. The PCG therefore becomes an appropriate target for organizational and educational interventions. Trials testing these interventions should involve randomization by PCG. This paper discusses the sample size required for a trial in primary care assessing the effect of a falls prevention programme among older people. In this trial PCGs will be randomized. The sample size calculations involve estimating the intra-PCG correlation in the primary outcome: fractured femur rate for those aged 65 years and over. No data on fractured femur rate were available at PCG level. PCGs are, however, similar in size and often coterminous with local authorities. Therefore, intra-PCG correlation in fractured femur rate was estimated from the intra-local-authority correlation calculated from routine data. Three alternative trial designs are considered. In the first design, PCGs are selected for inclusion in the trial from the total population of England (eight regions). In the second design, PCGs are selected from two regions only. The third design is similar to the second except that PCGs are stratified by region and baseline value of fracture rate. Intracluster correlation is estimated for each of these designs using two methods: an approximation which assumes cluster sizes are equal, and an alternative method which takes account of the fact that cluster sizes vary. Estimates of the sample size required vary between 7 and 26 PCGs in each intervention group, depending on the trial design and the method used to calculate sample size. Not unexpectedly, stratification by baseline
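
    The paper's own calculations are not reproduced here, but the two kinds of adjustment it alludes to can be sketched with the standard cluster-trial design effect 1 + (m − 1)ρ for equal cluster sizes and one common coefficient-of-variation adjustment, 1 + ((cv² + 1)m − 1)ρ, that allows cluster sizes to vary. Every number below (sample size per arm, mean cluster size, ICC, cv) is a hypothetical illustration, not a value from the trial.

```python
# Sketch of the two design-effect calculations the abstract alludes to (illustrative
# numbers, not the paper's): inflate an individually randomized sample size by the
# design effect, then convert to the number of clusters (PCGs) per arm.
import math

def design_effect_equal(m_bar, icc):
    """Design effect assuming equal cluster sizes."""
    return 1 + (m_bar - 1) * icc

def design_effect_unequal(m_bar, icc, cv):
    """One common adjustment allowing cluster sizes to vary (cv = SD/mean of cluster size)."""
    return 1 + ((cv**2 + 1) * m_bar - 1) * icc

n_individual = 120000   # hypothetical sample size per arm ignoring clustering
m_bar = 60000           # hypothetical mean PCG population at risk
icc = 0.00005           # hypothetical intra-PCG correlation for fracture rate
cv = 0.4                # hypothetical coefficient of variation of cluster sizes

for label, deff in [("equal cluster sizes", design_effect_equal(m_bar, icc)),
                    ("unequal cluster sizes", design_effect_unequal(m_bar, icc, cv))]:
    clusters = math.ceil(n_individual * deff / m_bar)
    print(f"{label}: design effect = {deff:.2f}, PCGs per arm = {clusters}")
```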

  13. Approximate Confidence Intervals for Standardized Effect Sizes in the Two-Independent and Two-Dependent Samples Design

    ERIC Educational Resources Information Center

    Viechtbauer, Wolfgang

    2007-01-01

    Standardized effect sizes and confidence intervals thereof are extremely useful devices for comparing results across different studies using scales with incommensurable units. However, exact confidence intervals for standardized effect sizes can usually be obtained only via iterative estimation procedures. The present article summarizes several…
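
    The article's exact-interval methods are not reproduced here; for contrast, the sketch below uses one widely cited large-sample approximation for the two-independent-samples standardized mean difference d, with Var(d) ≈ (n1 + n2)/(n1·n2) + d²/(2(n1 + n2)) and a normal quantile. The effect size and group sizes are hypothetical.

```python
# Sketch of one common large-sample approximation (not the article's exact methods)
# for a confidence interval around a two-independent-samples standardized mean
# difference d: Var(d) ~= (n1 + n2)/(n1*n2) + d**2 / (2*(n1 + n2)).
import math
from scipy.stats import norm

def approx_ci_d(d, n1, n2, confidence=0.95):
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    z = norm.ppf(0.5 + confidence / 2)
    return d - z * se, d + z * se

lo, hi = approx_ci_d(d=0.5, n1=40, n2=40)   # hypothetical effect size and group sizes
print(f"approximate 95% CI for d = 0.5: ({lo:.2f}, {hi:.2f})")
```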

  14. Use of Homogeneously-Sized Carbon Steel Ball Bearings to Study Microbially-Influenced Corrosion in Oil Field Samples.

    PubMed

    Voordouw, Gerrit; Menon, Priyesh; Pinnock, Tijan; Sharma, Mohita; Shen, Yin; Venturelli, Amanda; Voordouw, Johanna; Sexton, Aoife

    2016-01-01

    Microbially-influenced corrosion (MIC) contributes to the general corrosion rate (CR), which is typically measured with carbon steel coupons. Here we explore the use of carbon steel ball bearings, referred to as beads (55.0 ± 0.3 mg; Ø = 0.238 cm), for determining CRs. CRs for samples from an oil field in Oceania incubated with beads were determined by the weight loss method, using acid treatment to remove corrosion products. The release of ferrous and ferric iron was also measured and CRs based on weight loss and iron determination were in good agreement. Average CRs were 0.022 mm/yr for eight produced waters with high numbers (10(5)/ml) of acid-producing bacteria (APB), but no sulfate-reducing bacteria (SRB). Average CRs were 0.009 mm/yr for five central processing facility (CPF) waters, which had no APB or SRB due to weekly biocide treatment and 0.036 mm/yr for 2 CPF tank bottom sludges, which had high numbers of APB (10(6)/ml) and SRB (10(8)/ml). Hence, corrosion monitoring with carbon steel beads indicated that biocide treatment of CPF waters decreased the CR, except where biocide did not penetrate. The CR for incubations with 20 ml of a produced water decreased from 0.061 to 0.007 mm/yr when increasing the number of beads from 1 to 40. CRs determined with beads were higher than those with coupons, possibly also due to a higher weight of iron per unit volume used in incubations with coupons. Use of 1 ml syringe columns, containing carbon steel beads, and injected with 10 ml/day of SRB-containing medium for 256 days gave a CR of 0.11 mm/yr under flow conditions. The standard deviation of the distribution of residual bead weights, a measure for the unevenness of the corrosion, increased with increasing CR. The most heavily corroded beads showed significant pitting. Hence the use of uniformly sized carbon steel beads offers new opportunities for screening and monitoring of corrosion including determination of the distribution of corrosion rates, which allows
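
    The weight-loss method referred to above can be sketched with the usual ASTM G1-style conversion of mass loss, exposed area, exposure time, and density into a corrosion rate in mm/yr, applied to the bead geometry quoted in the abstract (55 mg, Ø = 0.238 cm). The assumed carbon-steel density, the exposure time, and the mass loss below are illustrative values, not data from the study.

```python
# Sketch of the standard weight-loss conversion to a corrosion rate (CR) in mm/yr,
# applied to the bead geometry quoted in the abstract (55 mg, diameter 0.238 cm).
# The 8.76e4 factor and the assumed carbon-steel density (7.85 g/cm^3) follow the
# usual ASTM G1-style formula; the exposure time and weight loss below are hypothetical.
import math

def corrosion_rate_mm_per_yr(weight_loss_g, area_cm2, hours, density_g_cm3=7.85):
    return 8.76e4 * weight_loss_g / (area_cm2 * hours * density_g_cm3)

diameter_cm = 0.238
area = math.pi * diameter_cm**2          # surface area of a sphere, pi * d^2

# Hypothetical incubation: 0.5 mg lost from one bead over 60 days.
cr = corrosion_rate_mm_per_yr(weight_loss_g=0.0005, area_cm2=area, hours=60 * 24)
print(f"bead surface area = {area:.3f} cm^2, CR = {cr:.3f} mm/yr")
```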

  15. Use of Homogeneously-Sized Carbon Steel Ball Bearings to Study Microbially-Influenced Corrosion in Oil Field Samples

    PubMed Central

    Voordouw, Gerrit; Menon, Priyesh; Pinnock, Tijan; Sharma, Mohita; Shen, Yin; Venturelli, Amanda; Voordouw, Johanna; Sexton, Aoife

    2016-01-01

    Microbially-influenced corrosion (MIC) contributes to the general corrosion rate (CR), which is typically measured with carbon steel coupons. Here we explore the use of carbon steel ball bearings, referred to as beads (55.0 ± 0.3 mg; Ø = 0.238 cm), for determining CRs. CRs for samples from an oil field in Oceania incubated with beads were determined by the weight loss method, using acid treatment to remove corrosion products. The release of ferrous and ferric iron was also measured and CRs based on weight loss and iron determination were in good agreement. Average CRs were 0.022 mm/yr for eight produced waters with high numbers (105/ml) of acid-producing bacteria (APB), but no sulfate-reducing bacteria (SRB). Average CRs were 0.009 mm/yr for five central processing facility (CPF) waters, which had no APB or SRB due to weekly biocide treatment and 0.036 mm/yr for 2 CPF tank bottom sludges, which had high numbers of APB (106/ml) and SRB (108/ml). Hence, corrosion monitoring with carbon steel beads indicated that biocide treatment of CPF waters decreased the CR, except where biocide did not penetrate. The CR for incubations with 20 ml of a produced water decreased from 0.061 to 0.007 mm/yr when increasing the number of beads from 1 to 40. CRs determined with beads were higher than those with coupons, possibly also due to a higher weight of iron per unit volume used in incubations with coupons. Use of 1 ml syringe columns, containing carbon steel beads, and injected with 10 ml/day of SRB-containing medium for 256 days gave a CR of 0.11 mm/yr under flow conditions. The standard deviation of the distribution of residual bead weights, a measure for the unevenness of the corrosion, increased with increasing CR. The most heavily corroded beads showed significant pitting. Hence the use of uniformly sized carbon steel beads offers new opportunities for screening and monitoring of corrosion including determination of the distribution of corrosion rates, which allows

  16. The effect of wind direction on the observed size distribution of particle adsorbed polycyclic aromatic hydrocarbons on an inner city sampling site.

    PubMed

    Schnelle-Kreis, J; Jänsch, T; Wolf, K; Gebefügi, I; Kettrup, A

    1999-08-01

    An investigation of the variability in the size distribution of particle adsorbed polycyclic aromatic hydrocarbons (PAHs) on an inner city sampling site showed differences depending on the wind direction. Particle size distributions of PAHs from outdoor air sampling were measured in Munich from 1994 to 1997. The sampling site is located northeast of a crossing with heavy traffic and southwest of a large inner city park. Depending on the wind direction, three different size distributions of particle adsorbed PAHs were observed. The maximum PAH concentration on very small particles (geometric mean diameter 75 nm) was observed with wind from west to southwest coming directly from the crossing area or the roads with heavy traffic. The maximum PAH concentration on particles with geometric mean diameter of 260 nm was found on days with wind from the built-up area north of the sampling site. On particles with geometric mean diameter of 920 nm the maximum PAH concentration was found on days with main wind directions from northeast to east. On these days the wind is blowing from the direction of the city park nearby. The distribution of particle adsorbed PAHs within different particle size classes is substantially influenced by the distance of the sampling site from strong sources of PAH loaded particulate matter. PMID:11529136

  17. Rationalizing nanomaterial sizes measured by atomic force microscopy, flow field-flow fractionation, and dynamic light scattering: sample preparation, polydispersity, and particle structure.

    PubMed

    Baalousha, M; Lead, J R

    2012-06-01

    This study aims to rationalize the variability in the measured size of nanomaterials (NMs) by some of the most commonly applied techniques in the field of nano(eco)toxicology and environmental sciences, including atomic force microscopy (AFM), dynamic light scattering (DLS), and flow field-flow fractionation (FlFFF). A validated sample preparation procedure for size evaluation by AFM is presented, along with a quantitative explanation of the variability of measured sizes by FlFFF, AFM, and DLS. The ratio of the z-average hydrodynamic diameter (d(DLS)) by DLS and the particle height by AFM (d(AFM)) approaches 1.0 for monodisperse samples and increases with sample polydispersity. A polydispersity index of 0.1 is suggested as a suitable limit above which DLS data can no longer be interpreted accurately. Conversion of the volume particle size distribution (PSD) by FlFFF-UV to the number PSD reduces the differences observed between the sizes measured by FlFFF (d(FlFFF)) and AFM. The remaining differences in the measured sizes can be attributed to particle structure (sphericity and permeability). The ratio d(FlFFF)/d(AFM) approaches 1 for small ion-coated NMs, which can be described as hard spheres, whereas d(FlFFF)/d(AFM) deviates from 1 for polymer-coated NMs, indicating that these particles are permeable, nonspherical, or both. These findings improve our understanding of the rather scattered data on NM size measurements reported in the environmental and nano(eco)toxicology literature and provide a tool for comparison of the measured sizes by different techniques.

  18. An analysis of Apollo lunar soil samples 12070,889, 12030,187, and 12070,891: Basaltic diversity at the Apollo 12 landing site and implications for classification of small-sized lunar samples

    NASA Astrophysics Data System (ADS)

    Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.

    2016-07-01

    Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.
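
    The Mg# calculation mentioned above (atomic Mg/[Mg + Fe]) is easily reproduced from oxide analyses; the sketch below converts hypothetical olivine MgO and FeO wt% values to molar proportions using standard molar masses. The example composition is made up for illustration.

```python
# Sketch of the Mg# calculation mentioned in the abstract: atomic Mg / (Mg + Fe),
# here derived from hypothetical olivine MgO and FeO analyses in wt%.
MOLAR_MASS = {"MgO": 40.30, "FeO": 71.84}   # g/mol

def mg_number(mgo_wt_pct: float, feo_wt_pct: float) -> float:
    mg = mgo_wt_pct / MOLAR_MASS["MgO"]
    fe = feo_wt_pct / MOLAR_MASS["FeO"]
    return 100.0 * mg / (mg + fe)

# Hypothetical olivine analysis from a basalt fragment.
print(f"Mg# = {mg_number(mgo_wt_pct=35.0, feo_wt_pct=28.0):.1f}")
```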

  19. An analysis of Apollo lunar soil samples 12070,889, 12030,187, and 12070,891: Basaltic diversity at the Apollo 12 landing site and implications for classification of small-sized lunar samples

    NASA Astrophysics Data System (ADS)

    Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.

    2016-09-01

    Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.

  20. Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.

    PubMed

    Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira

    2016-01-01

    Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions about how summary statistics, such as an average, are computed remain unanswered. This study investigated the sampling properties of the visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers for extracting the summary statistics: a global sampling model without sampling noise, a global sampling model with sampling noise, and a limited sampling model. We compared the performance of an ideal observer under each model with that of human observers using statistical efficiency analysis. The results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.

  1. Is the ANOVA F-Test Robust to Variance Heterogeneity When Sample Sizes are Equal?: An Investigation via a Coefficient of Variation

    ERIC Educational Resources Information Center

    Rogan, Joanne C.; Keselman, H. J.

    1977-01-01

    The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
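
    The question studied here is easy to probe by simulation: draw groups with equal means and equal sizes but unequal variances, run the one-way ANOVA repeatedly, and count rejections. The group sizes, standard deviations, and replication count below are arbitrary choices for illustration, not the article's conditions.

```python
# Quick simulation sketch of the question studied here: the empirical Type I error
# of the one-way ANOVA F-test when group means are equal, group sizes are equal,
# but variances are heterogeneous.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
n_per_group, sds, alpha, reps = 10, (1.0, 2.0, 4.0), 0.05, 5000

rejections = 0
for _ in range(reps):
    groups = [rng.normal(0.0, sd, n_per_group) for sd in sds]
    if f_oneway(*groups).pvalue < alpha:
        rejections += 1
print(f"empirical Type I error = {rejections / reps:.3f} (nominal {alpha})")
```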

  2. From Planning to Implementation: An Examination of Changes in the Research Design, Sample Size, and Precision of Group Randomized Trials Launched by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica

    2013-01-01

    This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…

  3. Evidence from a Large Sample on the Effects of Group Size and Decision-Making Time on Performance in a Marketing Simulation Game

    ERIC Educational Resources Information Center

    Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael

    2016-01-01

    Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…

  4. Power and Sample Size for the Root Mean Square Error of Approximation Test of Not Close Fit in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Hancock, Gregory R.; Freeman, Mara J.

    2001-01-01

    Provides select power and sample size tables and interpolation strategies associated with the root mean square error of approximation test of not close fit under standard assumed conditions. The goal is to inform researchers conducting structural equation modeling about power limitations when testing a model. (SLD)
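
    The article's tables are not reproduced here; as an indication of the kind of calculation involved, the sketch below follows the familiar noncentral chi-square approach (in the style of MacCallum, Browne, and Sugawara) for the power of the test of not-close fit, where H0: RMSEA ≥ 0.05 is rejected when the model chi-square is small. The RMSEA values, alpha, degrees of freedom, and sample sizes are illustrative assumptions.

```python
# Sketch of the noncentral chi-square power calculation (MacCallum-Browne-Sugawara
# style) for the RMSEA test of not-close fit: H0: RMSEA >= 0.05 is rejected when the
# model chi-square is small enough. The RMSEA values, alpha, and df are illustrative.
from scipy.stats import ncx2

def power_not_close_fit(n, df, rmsea0=0.05, rmsea1=0.01, alpha=0.05):
    nc0 = (n - 1) * df * rmsea0**2          # noncentrality at the H0 boundary
    nc1 = (n - 1) * df * rmsea1**2          # noncentrality under the alternative
    crit = ncx2.ppf(alpha, df, nc0)         # reject H0 for chi-square below this
    return ncx2.cdf(crit, df, nc1)

for n in (100, 200, 400, 800):
    print(f"N = {n:4d}, df = 50: power = {power_not_close_fit(n, df=50):.3f}")
```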

  5. Application of particle size distributions to total particulate stack samples to estimate PM2.5 and PM10 emission factors for agricultural sources

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Particle size distributions (PSD) have long been used to more accurately estimate the PM10 fraction of total particulate matter (PM) stack samples taken from agricultural sources. These PSD analyses were typically conducted using a Coulter Counter with 50 micrometer aperture tube. With recent increa...

  6. Diet- and Body Size-related Attitudes and Behaviors Associated with Vitamin Supplement Use in a Representative Sample of Fourth-grade Students in Texas

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this research was to examine diet- and body size-related attitudes and behaviors associated with supplement use in a representative sample of fourth-grade students in Texas. The research design consisted of cross-sectional data from the School Physical Activity and Nutrition study, ...

  7. Corpus Callosum Size, Reaction Time Speed and Variability in Mild Cognitive Disorders and in a Normative Sample

    ERIC Educational Resources Information Center

    Anstey, Kaarin J.; Mack, Holly A.; Christensen, Helen; Li, Shu-Chen; Reglade-Meslin, Chantal; Maller, Jerome; Kumar, Rajeev; Dear, Keith; Easteal, Simon; Sachdev, Perminder

    2007-01-01

    Intra-individual variability in reaction time increases with age and with neurological disorders, but the neural correlates of this increased variability remain uncertain. We hypothesized that both faster mean reaction time (RT) and less intra-individual RT variability would be associated with larger corpus callosum (CC) size in older adults, and…

  8. STREAMBED PARTICLE SIZE FROM PEBBLE COUNTS USING VISUALLY ESTIMATED SIZE CLASSES: JUNK OR USEFUL DATA?

    EPA Science Inventory

    In large-scale studies, it is often neither feasible nor necessary to obtain the large samples of 400 particles advocated by many geomorphologists to adequately quantify streambed surface particle-size distributions. Synoptic surveys such as U.S. Environmental Protection Agency...

  9. In vitro inflammatory and cytotoxic effects of size-segregated particulate samples collected during long-range transport of wildfire smoke to Helsinki

    SciTech Connect

    Jalava, Pasi I. E-mail: Pasi.Jalava@ktl.fi; Salonen, Raimo O.; Haelinen, Arja I.; Penttinen, Piia; Pennanen, Arto S.; Sillanpaeae, Markus; Sandell, Erik; Hillamo, Risto; Hirvonen, Maija-Riitta

    2006-09-15

    The impact of long-range transport (LRT) episodes of wildfire smoke on the inflammogenic and cytotoxic activity of urban air particles was investigated in mouse RAW 264.7 macrophages. The particles were sampled in four size ranges using a modified Harvard high-volume cascade impactor, and the samples were chemically characterized to identify different emission sources. The particulate mass concentration in the accumulation size range (PM1-0.2) was highly increased during two LRT episodes, but the contents of total and genotoxic polycyclic aromatic hydrocarbons (PAH) in the collected particulate samples were only 10-25% of those in the seasonal average sample. The ability of coarse (PM10-2.5), intermodal size range (PM2.5-1), PM1-0.2 and ultrafine (PM0.2) particles to cause cytokine production (TNFα, IL-6, MIP-2) decreased with decreasing particle size, but the size range had a much smaller impact on induced nitric oxide (NO) production and cytotoxicity or apoptosis. The aerosol particles collected during LRT episodes had a substantially lower activity in cytokine production than the corresponding particles of the seasonal average period, which is suggested to be due to chemical transformation of the organic fraction during aging. However, the episode events were associated with enhanced inflammogenic and cytotoxic activities per inhaled cubic meter of air due to the greatly increased particulate mass concentration in the accumulation size range, which may have public health implications.

  10. A microfluidic platform for precision small-volume sample processing and its use to size separate biological particles with an acoustic microdevice

    SciTech Connect

    Fong, Erika J.; Huang, Chao; Hamilton, Julie; Benett, William J.; Bora, Mihail; Burklund, Alison; Metz, Thomas R.; Shusteff, Maxim

    2015-11-23

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world to chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  11. A microfluidic platform for precision small-volume sample processing and its use to size separate biological particles with an acoustic microdevice

    DOE PAGES

    Fong, Erika J.; Huang, Chao; Hamilton, Julie; Benett, William J.; Bora, Mihail; Burklund, Alison; Metz, Thomas R.; Shusteff, Maxim

    2015-11-23

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world to chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  12. A Microfluidic Platform for Precision Small-volume Sample Processing and Its Use to Size Separate Biological Particles with an Acoustic Microdevice

    PubMed Central

    Fong, Erika J.; Huang, Chao; Hamilton, Julie; Benett, William J.; Bora, Mihail; Burklund, Alison; Metz, Thomas R.; Shusteff, Maxim

    2015-01-01

    A major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world to chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing. PMID:26651055

  13. Levitation force between a small magnet and a superconducting sample of finite size in the Meissner state

    NASA Astrophysics Data System (ADS)

    Lugo, Jorge; Sosa, Victor

    1999-10-01

    The repulsion force between a cylindrical superconductor in the Meissner state and a small permanent magnet was calculated under the assumption that the superconductor was formed by a continuous array of dipoles distributed in the finite volume of the sample. After summing up the dipole-dipole interactions with the magnet, we obtained analytical expressions for the levitation force as a function of the superconductor-magnet distance, radius and thickness of the sample. We analyzed two configurations, with the magnet in a horizontal or vertical orientation.

  14. 40 CFR 51.354 - Adequate tools and resources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 2 2013-07-01 2013-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...

  15. 40 CFR 51.354 - Adequate tools and resources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 2 2014-07-01 2014-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...

  16. 40 CFR 51.354 - Adequate tools and resources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 2 2012-07-01 2012-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...

  17. 10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...

  18. 13 CFR 108.200 - Adequate capital for NMVC Companies.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... VENTURE CAPITAL ("NMVC") PROGRAM Qualifications for the NMVC Program Capitalizing A Nmvc Company § 108.200 Adequate capital for NMVC Companies. You must meet the requirements of §§ 108.200-108.230 in order to... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Adequate capital for...

  19. 34 CFR 200.20 - Making adequate yearly progress.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 1 2012-07-01 2012-07-01 false Making adequate yearly progress. 200.20 Section 200.20... Basic Programs Operated by Local Educational Agencies Adequate Yearly Progress (ayp) § 200.20 Making... State data system; (vi) Include, as separate factors in determining whether schools are making AYP for...

  20. 34 CFR 200.20 - Making adequate yearly progress.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 1 2013-07-01 2013-07-01 false Making adequate yearly progress. 200.20 Section 200.20... Basic Programs Operated by Local Educational Agencies Adequate Yearly Progress (ayp) § 200.20 Making... State data system; (vi) Include, as separate factors in determining whether schools are making AYP for...

  1. 34 CFR 200.20 - Making adequate yearly progress.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Making adequate yearly progress. 200.20 Section 200.20... Basic Programs Operated by Local Educational Agencies Adequate Yearly Progress (ayp) § 200.20 Making... State data system; (vi) Include, as separate factors in determining whether schools are making AYP for...

  2. 34 CFR 200.20 - Making adequate yearly progress.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 1 2014-07-01 2014-07-01 false Making adequate yearly progress. 200.20 Section 200.20... Basic Programs Operated by Local Educational Agencies Adequate Yearly Progress (ayp) § 200.20 Making... State data system; (vi) Include, as separate factors in determining whether schools are making AYP for...

  3. 34 CFR 200.20 - Making adequate yearly progress.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 1 2011-07-01 2011-07-01 false Making adequate yearly progress. 200.20 Section 200.20... Basic Programs Operated by Local Educational Agencies Adequate Yearly Progress (ayp) § 200.20 Making... State data system; (vi) Include, as separate factors in determining whether schools are making AYP for...

  4. 40 CFR 716.25 - Adequate file search.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...

  5. 40 CFR 716.25 - Adequate file search.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...

  6. 40 CFR 716.25 - Adequate file search.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...

  7. 40 CFR 716.25 - Adequate file search.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...

  8. 40 CFR 716.25 - Adequate file search.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...

  9. 9 CFR 305.3 - Sanitation and adequate facilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Sanitation and adequate facilities. 305.3 Section 305.3 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... OF VIOLATION § 305.3 Sanitation and adequate facilities. Inspection shall not be inaugurated if...

  10. 9 CFR 305.3 - Sanitation and adequate facilities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Sanitation and adequate facilities. 305.3 Section 305.3 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... OF VIOLATION § 305.3 Sanitation and adequate facilities. Inspection shall not be inaugurated if...

  11. 40 CFR 51.354 - Adequate tools and resources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 2 2011-07-01 2011-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...

  12. 40 CFR 51.354 - Adequate tools and resources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 2 2010-07-01 2010-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...

  13. 10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...

  14. 10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...

  15. 10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...

  16. 10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...

  17. 13 CFR 107.200 - Adequate capital for Licensees.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Adequate capital for Licensees. 107.200 Section 107.200 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES Qualifying for an SBIC License Capitalizing an SBIC § 107.200 Adequate capital...

  18. 21 CFR 201.5 - Drugs; adequate directions for use.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Drugs; adequate directions for use. 201.5 Section 201.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL LABELING General Labeling Provisions § 201.5 Drugs; adequate directions for use....

  19. 21 CFR 201.5 - Drugs; adequate directions for use.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 4 2011-04-01 2011-04-01 false Drugs; adequate directions for use. 201.5 Section 201.5 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS: GENERAL LABELING General Labeling Provisions § 201.5 Drugs; adequate directions for use....

  20. 7 CFR 4290.200 - Adequate capital for RBICs.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Adequate capital for RBICs. 4290.200 Section 4290.200 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND... Qualifications for the RBIC Program Capitalizing a RBIC § 4290.200 Adequate capital for RBICs. You must meet...