Sample records for "sample sizes" search results

  1. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition, or learning, the effect sizes were relatively large although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In other fields, because of the large sample sizes, even trivial effects could be detected. This implies that researchers who could not obtain large enough effect sizes used larger samples to obtain significant results.

  2. Small sample sizes in the study of ontogenetic allometry: implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships went undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather, results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770

  3. 40 CFR 90.706 - Engine sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... = emission test result for an individual engine. x = mean of emission test results of the actual sample. FEL... test with the last test result from the previous model year and then calculate the required sample size.... Test results used to calculate the variables in the following Sample Size Equation must be final...

  4. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification are substantially more resource-consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the present paper seeks (1) to determine the minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research, and (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., a small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  5. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method perform better than Sobel's method, but the product method is recommended for use in practice because its computational load is lower than that of bootstrapping. An R package has been developed implementing the product method of sample size determination for longitudinal mediation study designs.
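
    For reference, Sobel's test has a simple closed form; the sketch below is an illustration with hypothetical path coefficients and standard errors, not values from the article:

    ```python
    # Minimal sketch of Sobel's test for a mediation effect a*b, where a is
    # the X -> M path and b is the M -> Y path. Values below are hypothetical.
    from scipy.stats import norm

    def sobel_test(a, se_a, b, se_b):
        """Return the Sobel z-statistic and its two-sided p-value."""
        se_ab = (b**2 * se_a**2 + a**2 * se_b**2) ** 0.5  # first-order delta-method SE
        z = a * b / se_ab
        return z, 2 * norm.sf(abs(z))

    z, p = sobel_test(a=0.40, se_a=0.10, b=0.35, se_b=0.12)
    print(f"Sobel z = {z:.2f}, p = {p:.3f}")  # z = 2.36, p = 0.018
    ```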

  6. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    PubMed

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  7. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example, at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
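
    The 13% and 70% figures can be reproduced from the standard normal-approximation sample size formula with a Bonferroni-corrected significance level of 0.05/m for m tests. A sketch (an independent illustration, not the authors' Excel calculator):

    ```python
    # Relative sample size needed to keep 80% power as the number of tests m
    # grows, with a Bonferroni-corrected two-sided significance level 0.05/m.
    # n is proportional to (z_{1-alpha/(2m)} + z_{1-beta})^2.
    from scipy.stats import norm

    def relative_n(m, alpha=0.05, power=0.80):
        return (norm.isf(alpha / (2 * m)) + norm.ppf(power)) ** 2

    print(f"{relative_n(1e7) / relative_n(1e6):.2f}")  # ~1.13: 13% more for 10M vs 1M tests
    print(f"{relative_n(10) / relative_n(1):.2f}")     # ~1.70: 70% more for 10 tests vs 1
    ```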

  8. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  9. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
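
    A numerical illustration of the fallacy, with made-up numbers rather than data from the article: a trivially small standardized effect becomes extremely statistically significant once the sample is large.

    ```python
    # A trivial standardized effect (Cohen's d = 0.05) reaches p < 0.001 once
    # each group has 10,000 observations, yet remains practically negligible.
    from scipy.stats import norm

    d, n = 0.05, 10_000            # standardized effect and per-group sample size
    z = d * (n / 2) ** 0.5         # large-sample two-group z-statistic
    p = 2 * norm.sf(z)
    print(f"z = {z:.2f}, p = {p:.1g}")  # z = 3.54, p = 0.0004
    ```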

  10. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  11. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    NASA Astrophysics Data System (ADS)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    The frictional behaviour of rocks, from the initial stage of loading to the final shear displacement along the formed shear plane, has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations of rock testing facilities as well as the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane was captured for analysis, with particular interest in sample-size dependency. Several important phenomena were observed: a) the rate of transition from brittleness to ductility in rock is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; b) the sample size influences the angle of the formed shear band; and c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting lower friction coefficients than larger samples. We interpret our results in terms of a thermodynamic approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as the result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces, and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain the failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research into rock failure mechanisms.

  12. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level, P < 0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by spatial interpolation to fill in missing values in non-sampled areas, accuracy improves remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.

  13. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed to closely estimate the statistics for particular population parameters has long been an issue. Although a sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters for a particular population. Meanwhile, the guideline of a p-value less than 0.05 is widely used as inferential evidence. Therefore, this study audited statistics computed from various subsamples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis, with eight subsamples for each, were analyzed. The results show that the statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters of a medium-sized population.

  14. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.

  15. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimation of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine the population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) statement was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimum and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes of 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, the 80% upper confidence limit (UCL) of the SD, the 70% UCL of the SD, and the 60% UCL of the SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th-percentile and the maximum SD from 10 samples were used. A greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of the SD, the maximum SD, the 80th-percentile SD, and the 75th-percentile SD to calculate the sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or the average SD to calculate the sample size should be avoided.
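
    The mechanics behind these findings follow from the usual normal-approximation formula: the planned sample size is computed from the assumed SD, while the achieved power depends on the true SD. A minimal sketch using the article's population SD of 44 and a medium effect (Cohen's d = 0.5); the assumed SD of 35 is an arbitrary underestimate chosen for illustration:

    ```python
    # Plan a two-group trial from an assumed SD, then evaluate the power actually
    # achieved when the true population SD is 44 (as in the article); delta = 22
    # corresponds to Cohen's d = 0.5. The assumed SD of 35 is an arbitrary underestimate.
    from math import ceil, sqrt
    from scipy.stats import norm

    alpha, target_power, true_sd, delta = 0.05, 0.80, 44.0, 22.0

    def n_per_group(assumed_sd):
        z = norm.isf(alpha / 2) + norm.ppf(target_power)
        return ceil(2 * (z * assumed_sd / delta) ** 2)

    def achieved_power(n):
        return norm.sf(norm.isf(alpha / 2) - delta / (true_sd * sqrt(2 / n)))

    for assumed_sd in (35.0, 44.0):
        n = n_per_group(assumed_sd)
        print(f"assumed SD {assumed_sd:.0f}: n/group = {n}, power = {achieved_power(n):.2f}")
    # assumed SD 35: n/group = 40, power = 0.61 (underpowered)
    # assumed SD 44: n/group = 63, power = 0.80
    ```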

  16. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reporting sample size calculations and the sample sizes used were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact factors (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.

  17. Damage Accumulation in Silica Glass Nanofibers.

    PubMed

    Bonfanti, Silvia; Ferrero, Ezequiel E; Sellerio, Alessandro L; Guerra, Roberto; Zapperi, Stefano

    2018-06-06

    The origin of the brittle-to-ductile transition, experimentally observed in amorphous silica nanofibers as the sample size is reduced, is still debated. Here we investigate the issue by extensive molecular dynamics simulations at low and room temperatures for a broad range of sample sizes, with open and periodic boundary conditions. Our results show that the enhanced ductility at small sample sizes is primarily due to diffuse damage accumulation, which for larger samples leads to brittle catastrophic failure. Surface effects such as boundary fluidization contribute to ductility at room temperature by promoting necking, but are not the main driver of the transition. Our results suggest that the experimentally observed size-induced ductility of silica nanofibers is a manifestation of finite-size criticality, as expected in general for quasi-brittle disordered networks.

  18. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    PubMed

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials; this is expected, because the re-estimation procedure generally requires an overestimate of the variance, and thus of the sample size, to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

  20. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    De, Salvo L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value of consumer's risk and fraction nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes, primarily because of the difficulty of calculating the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations, however, will result in the maximum sample size possible under the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) under a Binomial sampling plan and only 273 under a Hypergeometric sampling plan. The Hypergeometric thus yields a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
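
    The lot-of-400 example can be checked by an iterative search over the hypergeometric distribution; the sketch below uses SciPy rather than the original spreadsheet:

    ```python
    # Smallest zero-acceptance sample size n such that a lot of N items with D
    # nonconforming units passes inspection with probability <= 1 - confidence,
    # i.e. P(0 nonconforming among n sampled) <= 1 - confidence.
    from scipy.stats import hypergeom

    def min_sample_size(N, D, confidence):
        for n in range(1, N + 1):
            if hypergeom.pmf(0, N, D, n) <= 1 - confidence:
                return n
        return N

    # The article's example: lot of 400, 1% nonconforming (4 units), 99% confidence.
    print(min_sample_size(N=400, D=4, confidence=0.99))  # 273
    ```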

  21. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required by the test. The formulas are applicable to cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test achieves statistical power generally superior to that of the approximate t test. A numerical example is provided.
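
    For reference, Yuen's trimmed-mean test itself is available in SciPy via the trim argument of ttest_ind (SciPy 1.7 and later); the two samples below are simulated placeholders with unequal variances and unequal sizes:

    ```python
    # Yuen's two-sample trimmed mean test: ttest_ind with trim > 0 performs
    # Yuen's test (here 20% trimming on each end).
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.0, scale=1.0, size=30)
    y = rng.normal(loc=0.8, scale=3.0, size=45)

    res = ttest_ind(x, y, equal_var=False, trim=0.2)
    print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
    ```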

  22. Sample size and power for cost-effectiveness analysis (part 1).

    PubMed

    Glick, Henry A

    2011-03-01

    Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed, and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables, such as changes in blood pressure or weight, are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described, and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived is discussed.

  23. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.

  24. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
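
    For readers unfamiliar with the quantity being estimated: lambda is the dominant eigenvalue of the stage-structured projection matrix. A minimal sketch with a hypothetical three-stage matrix whose vital rates are not from the study:

    ```python
    # Population growth rate lambda as the dominant eigenvalue of a
    # stage-structured projection matrix; the vital rates are hypothetical.
    import numpy as np

    A = np.array([
        [0.00, 0.50, 2.00],   # stage-specific fecundities
        [0.30, 0.40, 0.00],   # survival/transition probabilities
        [0.00, 0.35, 0.80],
    ])

    lam = np.abs(np.linalg.eigvals(A)).max()  # spectral radius = dominant eigenvalue
    print(f"lambda = {lam:.3f}")              # > 1 implies growth, < 1 implies decline
    ```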

  25. Effect of roll hot press temperature on crystallite size of PVDF film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartono, Ambran, E-mail: ambranhartono@yahoo.com; Sanjaya, Edi; Djamal, Mitra

    2014-03-24

    PVDF films were fabricated using a hot roll press. Samples were prepared at nine different temperatures in order to examine the effect of roll hot press temperature on the crystallite size of the PVDF films. Diffraction patterns of the samples were obtained by X-ray diffraction characterization, and the crystallite size of each sample was then calculated from its diffraction pattern using the Scherrer equation. The experimental results and crystallite size calculations show that, for samples processed at temperatures from 130 °C up to 170 °C, the crystallite size increased from 7.2 nm up to 20.54 nm. These results show that increasing temperature also increases the crystallite size of the sample. This happens because higher temperatures raise the degree of crystallization of the PVDF film, so the crystallite size also increases. This behaviour indicates that the specific volume or size of the crystals depends on the magnitude of the temperature, as previously studied by Nakagawa.
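
    The Scherrer equation referred to above is D = K·λ/(β·cos θ), where β is the peak full width at half maximum in radians and θ is the Bragg angle. A minimal sketch, assuming Cu Kα radiation, a shape factor K = 0.9, and a hypothetical peak:

    ```python
    # Crystallite size from the Scherrer equation: D = K * wavelength / (beta * cos(theta)).
    # The wavelength (Cu K-alpha), K, and the peak values below are assumptions.
    from math import cos, radians

    K = 0.9               # Scherrer shape factor (common assumption)
    wavelength = 0.15406  # nm, Cu K-alpha radiation (assumed)

    def scherrer_size(fwhm_deg, two_theta_deg):
        beta = radians(fwhm_deg)             # peak FWHM converted to radians
        theta = radians(two_theta_deg / 2)   # Bragg angle
        return K * wavelength / (beta * cos(theta))

    # Hypothetical peak near the beta-phase PVDF reflection at 2-theta ~ 20.3 deg:
    print(f"D = {scherrer_size(fwhm_deg=1.1, two_theta_deg=20.3):.1f} nm")  # ~7.3 nm
    ```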

  26. Particle size analysis of sediments, soils and related particulate materials for forensic purposes using laser granulometry.

    PubMed

    Pye, Kenneth; Blott, Simon J

    2004-08-11

    Particle size is a fundamental property of any sediment, soil or dust deposit which can provide important clues to nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with a high resolution. The Coulter LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04-2000 μm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.

  27. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  28. Estimating the size of hidden populations using respondent-driven sampling data: Case examples from Morocco

    PubMed Central

    Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S

    2015-01-01

    Background Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with other men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used, particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908

  29. [Practical aspects regarding sample size in clinical research].

    PubMed

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowing the right sample size lets us judge whether the results published in medical papers rest on a suitable design and reach proper conclusions according to the statistical analysis. To estimate the sample size, we must consider the type I error, the type II error, the variance, the size of the effect, and the significance and power of the test. To decide which mathematical formula to use, we must define the kind of study at hand: a prevalence study, a study of means, or a comparative study. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.

  30. Rasch fit statistics and sample size considerations for polytomous data

    PubMed Central

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-01-01

    Background Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Methods Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire – 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. Results The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. Conclusion It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges. PMID:18510722

  31. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    PubMed

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  32. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (RUMM) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  33. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.

  34. Sample size calculation for a proof of concept study.

    PubMed

    Yin, Yin

    2002-05-01

    Sample size calculation is vital for a confirmatory clinical trial, since the regulatory agencies require the probability of making a Type I error to be suitably small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC study and the process of sample size calculation. The results are presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for the PoC, and the sample size used for the PoC.

  35. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist and is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. A smaller α or larger power will increase the sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many programs can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
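
    The determinants listed above combine into the familiar normal-approximation formula for a two-group comparison of means, n per group ≈ 2(z_{1-α/2} + z_{1-β})²/d² for standardized effect size d. A short sketch with illustrative values:

    ```python
    # Per-group sample size for a two-group comparison of means at significance
    # level alpha, power 1 - beta, and standardized effect size d (illustrative).
    from math import ceil
    from scipy.stats import norm

    def n_per_group(d, alpha=0.05, power=0.80):
        z = norm.isf(alpha / 2) + norm.ppf(power)
        return ceil(2 * (z / d) ** 2)

    for d in (0.2, 0.5, 0.8):  # Cohen's small, medium, and large effects
        print(f"d = {d}: n per group = {n_per_group(d)}")
    # d = 0.2: 393, d = 0.5: 63, d = 0.8: 25 -- smaller effects need far larger samples
    ```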

  36. Mass spectra features of biomass burning boiler and coal burning boiler emitted particles by single particle aerosol mass spectrometer.

    PubMed

    Xu, Jiao; Li, Mei; Shi, Guoliang; Wang, Haiting; Ma, Xian; Wu, Jianhui; Shi, Xurong; Feng, Yinchang

    2017-11-15

    In this study, the single particle mass spectra signatures of particles emitted by a coal burning boiler and a biomass burning boiler were studied. Particle samples were suspended in a clean resuspension chamber and analyzed by ELPI and SPAMS simultaneously. The size distributions of BBB (the biomass burning boiler sample) and CBB (the coal burning boiler sample) are different: BBB peaks at a smaller size, and CBB peaks at a larger size. The mass spectra signatures of the two samples were studied by analyzing the average mass spectrum of each particle cluster extracted by ART-2a in different size ranges. In conclusion, the BBB sample mostly consists of OC and EC containing particles, and a small fraction of K-rich particles, in the size range of 0.2-0.5 μm. In 0.5-1.0 μm, the BBB sample consists of EC, OC, K-rich and Al_Silicate containing particles; the CBB sample consists of EC and ECOC containing particles, while Al_Silicate (including Al_Ca_Ti_Silicate, Al_Ti_Silicate, Al_Silicate) containing particles account for higher fractions as size increases. The similarity of the single particle mass spectrum signatures of the two samples was studied by analyzing the dot product; the results indicated that some of the single particle mass spectra of the two samples in the same size range are similar, which poses a challenge to future source apportionment using single particle aerosol mass spectrometers. The results of this study provide physicochemical information on important sources which contribute to particle pollution and will support source apportionment activities. Copyright © 2017. Published by Elsevier B.V.

  37. [Experimental study on particle size distributions of an engine fueled with blends of biodiesel].

    PubMed

    Lu, Xiao-Ming; Ge, Yun-Shan; Han, Xiu-Kun; Wu, Si-Jin; Zhu, Rong-Fu; He, Chao

    2007-04-01

    The purpose of this study was to obtain the particle size distributions of an engine fueled with biodiesel and its blends. A turbocharged DI diesel engine was tested on a dynamometer. A pump with a flow rate of 80 L/min and glass fiber filters with a diameter of 90 mm were used to sample engine particles in the exhaust pipe. The sampling duration was 10 minutes. Particle size distributions were measured by a laser diffraction particle size analyzer. The results indicated that higher engine speeds resulted in smaller particle sizes and narrower distributions. The modes on the distribution curves and the mode variation were larger for dry samples than for wet samples (dry: around 10-12 μm vs. wet: around 4-10 μm). At low speed, the Sauter mean diameter d32 of the dry samples was largest with B100, smallest with diesel fuel, and intermediate with B20; at high speed, d32 was largest with B20, smallest with B100, and intermediate with diesel. The median diameter d(0.5) also reflected these results. Except at 2000 r/min, d32 of the wet samples was largest with B20, smallest with diesel, and intermediate with B100. The large mode variation resulted in an increase of d32.

  18. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
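
    For illustration, the sketch below contrasts the simple expansion estimator with a ratio estimator that uses sampling-unit area as the auxiliary variable, as in the study; the population values and sampling fraction are hypothetical, not the pronghorn data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sampling units: areas (auxiliary variable) and animal counts.
    areas  = rng.uniform(5.0, 20.0, size=120)     # km^2 per unit
    counts = rng.poisson(0.8 * areas)             # counts, loosely tied to area

    N, n = len(areas), 40                         # population and sample sizes
    idx = rng.choice(N, size=n, replace=False)    # SRS without replacement

    # Simple expansion estimator of the population total.
    t_simple = N * counts[idx].mean()

    # Ratio estimator using unit area as the auxiliary variable.
    t_ratio = counts[idx].sum() / areas[idx].sum() * areas.sum()

    print(f"simple: {t_simple:.0f}, ratio: {t_ratio:.0f}, true: {counts.sum()}")
    ```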

  19. An improved methodology of asymmetric flow field flow fractionation hyphenated with inductively coupled mass spectrometry for the determination of size distribution of gold nanoparticles in dietary supplements.

    PubMed

    Mudalige, Thilak K; Qu, Haiou; Linder, Sean W

    2015-11-13

    Engineered nanoparticles are present in large numbers of commercial products claiming various health benefits. Nanoparticle absorption, distribution, metabolism, excretion, and toxicity in a biological system depend on particle size, so the determination of size and size distribution is essential for full characterization. Number-based average size and size distribution are major parameters for this characterization, and in the case of polydispersed samples, large numbers of particles are needed to obtain accurate size distribution data. Herein, we report a rapid methodology, demonstrating improved nanoparticle recovery and excellent size resolution, for the characterization of gold nanoparticles in dietary supplements using asymmetric flow field flow fractionation coupled with visible absorption spectrometry and inductively coupled plasma mass spectrometry. A linear relationship between gold nanoparticle size and retention time was observed and used for the characterization of unknown samples. The particle size results for unknown samples were compared with results from traditional size analysis by transmission electron microscopy and found to deviate by less than 5% over the size range from 7 to 30 nm. Published by Elsevier B.V.

  20. The effect of grain size and cement content on index properties of weakly solidified artificial sandstones

    NASA Astrophysics Data System (ADS)

    Atapour, Hadi; Mortazavi, Ali

    2018-04-01

    The effects of textural characteristics, especially grain size, on the index properties of weakly solidified artificial sandstones are studied. For this purpose, a relatively large number of laboratory tests were carried out on artificial sandstones produced in the laboratory. The prepared samples represent fifteen sandstone types consisting of five different median grain sizes and three different cement contents. Index rock properties, including effective porosity, bulk density, point load strength index, and Schmidt hammer values (SHVs), were determined. Experimental results showed that grain size has significant effects on the index properties of weakly solidified sandstones. The porosity of the samples is inversely related to grain size and decreases linearly as grain size increases. In contrast, a direct relationship was observed between grain size and dry bulk density: bulk density increased with increasing median grain size. Furthermore, the point load strength index and SHV of the samples increased with grain size. These observations are indirectly related to the porosity decrease as a function of median grain size.

  1. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    USGS Publications Warehouse

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≈96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.
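
    The core of the autocorrelation idea can be sketched briefly: coarser textures decorrelate over longer pixel lags. The snippet below computes a correlation length from a grayscale image via the FFT; it illustrates the principle only, not Rubin's [2004] calibrated algorithm, and the 0.5 threshold is an arbitrary assumption.

    ```python
    import numpy as np

    def correlation_length(image, threshold=0.5):
        """Lag (in pixels) at which the horizontal autocorrelation of a
        grayscale image first drops below `threshold`. Coarser sediment
        textures decorrelate more slowly, so this lag tracks grain size."""
        img = np.asarray(image, dtype=float)
        img -= img.mean()
        # 2-D autocorrelation via the Wiener-Khinchin theorem (FFT of power spectrum)
        power = np.abs(np.fft.fft2(img)) ** 2
        acf = np.fft.ifft2(power).real
        acf /= acf[0, 0]                       # normalize so zero lag = 1
        row = acf[0, : img.shape[1] // 2]      # horizontal lags only
        below = np.where(row < threshold)[0]
        return int(below[0]) if below.size else len(row)

    # usage (hypothetical): correlation_length(camera_frame_as_2d_array)
    ```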

  2. Sampling guidelines for oral fluid-based surveys of group-housed animals.

    PubMed

    Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J

    2017-09-01

    Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
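
    A simplified version of the detection-probability calculation is sketched below, assuming independent pen-level samples; the paper's simulations additionally modeled spatial allocation and within-barn prevalence dynamics, and the sensitivity and specificity values here are placeholders.

    ```python
    def prob_barn_detection(n, prevalence, se=0.95, sp=0.99):
        """Probability that at least one of n randomly chosen pen-level
        oral fluid samples tests positive, assuming independent pens
        (ignores the spatial clustering the authors modeled)."""
        p_pos = prevalence * se + (1.0 - prevalence) * (1.0 - sp)
        return 1.0 - (1.0 - p_pos) ** n

    for n in (5, 10, 20):
        print(n, round(prob_barn_detection(n, prevalence=0.10), 3))
    ```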

  3. Role of Sample Processing Strategies at the European Union National Reference Laboratories (NRLs) Concerning the Analysis of Pesticide Residues.

    PubMed

    Hajeb, Parvaneh; Herrmann, Susan S; Poulsen, Mette E

    2017-07-19

    The guidance document SANTE 11945/2015 recommends that cereal samples be milled to a particle size preferably smaller than 1.0 mm and that extensive heating of the samples should be avoided. The aim of the present study was therefore to investigate the differences in milling procedures, obtained particle size distributions, and the resulting pesticide residue recovery when cereal samples were milled at the European Union National Reference Laboratories (NRLs) with their routine milling procedures. A total of 23 NRLs participated in the study. The oat and rye samples milled by each NRL were sent to the European Union Reference Laboratory on Cereals and Feedingstuff (EURL) for the determination of the particle size distribution and pesticide residue recovery. The results showed that the NRLs used several different brands and types of mills. Large variations in the particle size distributions and pesticide extraction efficiencies were observed even between samples milled by the same type of mill.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

    Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the MeIM/Zn²⁺ molar ratio increased the particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature enhanced the textural properties of the ZIF-8 samples. • A decrease in MeIM/Zn²⁺ enhanced the textural properties of the ZIF-8 samples.

  5. Extraction of citral oil from lemongrass (Cymbopogon Citratus) by steam-water distillation technique

    NASA Astrophysics Data System (ADS)

    Alam, P. N.; Husin, H.; Asnawi, T. M.; Adisalamun

    2018-04-01

    In Indonesia, citral oil is produced from lemongrass (Cymbopogon citratus) by a traditional technique that gives a low yield. To improve the yield, an appropriate extraction technology is required. In this research, a steam-water distillation technique was applied to extract the essential oil from lemongrass, and the effects of sample particle size and bed volume on the yield and quality of the citral oil produced were investigated. A drying and refining time of 2 hours was used as a fixed variable. The minimum citral oil yield of 0.53% was obtained with a sample particle size of 3 cm and a bed volume of 80%, whereas the maximum yield of 1.95% was obtained with a particle size of 15 cm and a bed volume of 40%. The lowest specific gravity of 0.80 and the highest specific gravity of 0.905 were obtained with a particle size of 8 cm at 80% bed volume and a particle size of 12 cm at 70% bed volume, respectively. The lowest refractive index of 1.480 and the highest refractive index of 1.495 were obtained with a particle size of 8 cm at 70% bed volume and a particle size of 15 cm at 40% bed volume, respectively. The citral oil produced was soluble in 70% alcohol at a 1:1 ratio, and the citral concentration obtained was around 79%.

  6. Composition and Morphology of Major Particle Types from Airborne Measurements during ICE-T and PRADACS Field Studies

    NASA Astrophysics Data System (ADS)

    Venero, I. M.; Mayol-Bracero, O. L.; Anderson, J. R.

    2012-12-01

    As part of the Puerto Rican African Dust and Cloud Study (PRADACS) and the Ice in Clouds Experiment - Tropical (ICE-T), we sampled giant airborne particles to study their elemental composition, morphology, and size distributions. Samples were collected in July 2011 during field measurements performed by NCAR's C-130 aircraft based at St. Croix, U.S. Virgin Islands. The results presented here correspond to the measurements made during research flight #8 (RF8). Aerosol particles with Dp > 1 μm were sampled with the Giant Nuclei Impactor, and particles with Dp < 1 μm were collected with the Wyoming Inlet. Collected particles were later analyzed using an automated scanning electron microscope (SEM) and manual observation by field emission SEM. We identified the chemical composition and morphology of major particle types in filter samples collected at different altitudes (e.g., 300 ft, 1000 ft, and 4500 ft). Results from the flight upwind of Puerto Rico show that particles in the giant nuclei size range are dominated by sea salt. Samples collected at altitudes of 300 ft and 1000 ft showed the highest number of sea salt particles, and samples collected at higher altitudes (> 4000 ft) showed the highest concentrations of clay material. HYSPLIT back trajectories for all samples showed that the low-altitude samples originated in the free troposphere over the Atlantic Ocean, which may account for the high sea salt content, and that the source of the high-altitude samples was closer to the Saharan-Sahel desert region; these samples were therefore likely influenced by African dust. Size distribution results for quartz and unreacted sea-salt aerosols collected on the Giant Nuclei Impactor showed that sample RF08 - 12:05 UTM (300 ft) had the largest mean size (2.936 μm) of all the samples. The Wyoming Inlet on the C-130 aircraft provided additional information, showing smaller sizes for all particles. The different mineral components of the dust have different size distributions, so a fractionation process could occur during transport. Also, the presence of supermicron sea salt at altitude is important for cloud processes.

  7. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    NASA Astrophysics Data System (ADS)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of binary probit regression models are commonly estimated by the maximum likelihood estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to compare the chance of separation occurring in binary probit regression between the MLE method and Firth's approach; second, to compare the performance of binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are assessed by simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method is higher than with Firth's approach for small sample sizes; for larger sample sizes, the probability decreased and was nearly identical between the two. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes, while for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
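
    Complete separation by a single continuous covariate is easy to check directly, as in this minimal sketch (quasi-complete separation, which involves ties at the boundary, is not covered; the simulated data are hypothetical):

    ```python
    import numpy as np

    def complete_separation(x, y):
        """True if covariate x perfectly separates the binary response y,
        i.e., some cutpoint puts all 0s on one side and all 1s on the
        other; in that case the probit/logit MLE diverges."""
        x0, x1 = x[y == 0], x[y == 1]
        return x0.max() < x1.min() or x1.max() < x0.min()

    rng = np.random.default_rng(1)
    x = rng.normal(size=12)            # small samples separate more often
    y = (x > 0).astype(int)            # perfectly separated response
    print(complete_separation(x, y))   # True -> MLE would not converge
    ```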

  8. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used (see the sketch below); and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
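
    The optimal (Neyman) allocation mentioned in conclusion (3) assigns stratum sample sizes in proportion to N_h × S_h. A minimal sketch follows, with hypothetical stratum sizes and moisture standard deviations:

    ```python
    import numpy as np

    def neyman_allocation(n_total, stratum_sizes, stratum_sds):
        """Optimal (Neyman) allocation: n_h proportional to N_h * S_h.
        Stratum sizes and SDs here are assumptions for illustration."""
        N = np.asarray(stratum_sizes, dtype=float)
        S = np.asarray(stratum_sds, dtype=float)
        weights = N * S / (N * S).sum()
        return np.round(n_total * weights).astype(int)

    # e.g., three depth strata with hypothetical sizes and moisture SDs
    print(neyman_allocation(30, [100, 100, 50], [4.0, 2.5, 1.0]))  # -> [17 11  2]
    ```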

  9. Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size

    PubMed Central

    Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa

    2016-01-01

    Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce the seizure burden in a neonate. Current AEDs exhibit sub-optimal efficacy, and several randomized controlled trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size, with median differences of 30.7 fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, an appropriate outcome measure, and that control for differences in Td between groups in the analysis will be valid and will minimise sample size. PMID:27824913
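
    For a sense of scale, the textbook normal-approximation formula below gives the per-arm n for comparing two response proportions; it is not the simulation machinery used in the study, and the response rates are invented.

    ```python
    from scipy.stats import norm

    def n_per_arm(p_control, p_trial, alpha=0.05, power=0.8):
        """Normal-approximation sample size per arm for comparing two
        proportions (e.g., seizure-cessation rates), a standard textbook
        formula rather than the paper's simulation-based approach."""
        za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
        var = p_control * (1 - p_control) + p_trial * (1 - p_trial)
        return (za + zb) ** 2 * var / (p_control - p_trial) ** 2

    print(round(n_per_arm(0.30, 0.50)))  # ~90; round up in practice
    ```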

  10. A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.

    PubMed

    Bord, Séverine; Bioche, Christèle; Druilhet, Pierre

    2018-05-01

    We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
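
    A sketch of the penalization idea for the constant-rate removal model: the binomial log-likelihood is written with gamma functions so N can vary continuously, and a weak −log(p) penalty keeps the sampling rate away from zero. The catch data and penalty weight are assumptions, and this is not the authors' exact prior.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    catches = np.array([42.0, 25.0, 14.0])    # hypothetical removal counts

    def neg_log_post(theta, penalty=1.0):
        N, p = theta
        ll, remaining = 0.0, N
        for c in catches:
            # continuous binomial log-pmf via gammaln, so N need not be integer
            ll += (gammaln(remaining + 1) - gammaln(c + 1)
                   - gammaln(remaining - c + 1)
                   + c * np.log(p) + (remaining - c) * np.log1p(-p))
            remaining -= c
        return -(ll + penalty * np.log(p))    # weak penalty keeps p away from 0

    res = minimize(neg_log_post, x0=[150.0, 0.3], method="L-BFGS-B",
                   bounds=[(catches.sum() + 1e-6, 5000.0), (1e-3, 0.999)])
    print("penalized estimates (N, p):", res.x)
    ```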

  11. Effects of crystallite size on the structure and magnetism of ferrihydrite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiaoming; Zhu, Mengqiang; Koopal, Luuk K.

    2015-12-15

    The structure and magnetic properties of nano-sized (1.6 to 4.4 nm) ferrihydrite samples are systematically investigated through a combination of X-ray diffraction (XRD), X-ray pair distribution function (PDF), X-ray absorption spectroscopy (XAS) and magnetic analyses. The XRD, PDF and Fe K-edge XAS data of the ferrihydrite samples are all fitted well with the Michel ferrihydrite model, indicating similar local-, medium- and long-range ordered structures. PDF and XAS fitting results indicate that, with increasing crystallite size, the average coordination numbers of Fe–Fe and the unit cell parameter c increase, while Fe2 and Fe3 vacancies and the unit cell parameter a decrease.more » Mössbauer results indicate that the surface layer is relatively disordered, which might have been caused by the random distribution of Fe vacancies. These results support Hiemstra's surface-depletion model in terms of the location of disorder and the variations of Fe2 and Fe3 occupancies with size. Magnetic data indicate that the ferrihydrite samples show antiferromagnetism superimposed with a ferromagnetic-like moment at lower temperatures (100 K and 10 K), but ferrihydrite is paramagnetic at room temperature. In addition, both the magnetization and coercivity decrease with increasing ferrihydrite crystallite size due to strong surface effects in fine-grained ferrihydrites. Smaller ferrihydrite samples show less magnetic hyperfine splitting and a lower unblocking temperature (T B) than larger samples. The dependence of magnetic properties on grain size for nano-sized ferrihydrite provides a practical way to determine the crystallite size of ferrihydrite quantitatively in natural environments or artificial systems.« less

  12. Are catchment-wide erosion rates really "Catchment-Wide"? Effects of grain size on erosion rates determined from 10Be

    NASA Astrophysics Data System (ADS)

    Reitz, M. A.; Seeber, L.; Schaefer, J. M.; Ferguson, E. K.

    2012-12-01

    Early studies pioneering the method of measuring catchment-wide erosion rates from 10Be in alluvial sediment took samples at river mouths and used the sand-size grain fraction from the riverbeds in order to average upstream erosion rates and measure erosion patterns. Finer particles (<0.0625 mm) were excluded to reduce the possibility of a wind-blown component of sediment, and coarser particles (>2 mm) were excluded to better approximate erosion from the entire upstream catchment area (coarse grains are generally found near the source). Now that the sensitivity of 10Be measurements is rapidly increasing, we can precisely measure erosion rates from rivers eroding active tectonic regions. These active regions create higher-energy drainage systems that erode faster and carry coarser sediment. In these settings, does the sand-sized fraction fully capture the average erosion of the upstream drainage area? Or does a different grain size fraction provide a more accurate measure of upstream erosion? During a study of the Neto River in Calabria, southern Italy, we took 8 samples along the length of the river, focusing on collecting samples just below confluences with major tributaries, in order to use the high-resolution erosion rate data to constrain tectonic motion. The samples we measured were sieved to either a 0.125-0.710 mm fraction or a 0.125-4 mm fraction (depending on how much of the former was available). After measuring these 8 samples for 10Be and determining erosion rates, we used the approach of Granger et al. [1996] to calculate the subcatchment erosion rates between each sample point. In the subcatchments of the river where we used grain sizes up to 4 mm, we measured very low 10Be concentrations (corresponding to high erosion rates) and calculated nonsensical subcatchment erosion rates (i.e. negative rates). We therefore hypothesize that the coarser grain sizes we included preferentially sample a smaller upstream area, not the entire upstream catchment, as is assumed when measurements are based solely on the sand-sized fraction. To test this hypothesis, we used samples with a variety of grain sizes from the Shillong Plateau. We sieved 5 samples into three grain size fractions, 0.125-0.710 mm, 0.710-4 mm, and >4 mm, and measured 10Be concentrations in each fraction. Although there is some variation in which grain size fraction yields the highest erosion rate, generally the coarser fractions have higher erosion rates. More significant are the subcatchment erosion rate calculations, which suggest that even medium-sized grains (0.710-4 mm) sample an area smaller than the entire upstream area; this finding is consistent with the nonsensical results from the Neto River study. This result has numerous implications for the interpretation of 10Be erosion rates: most importantly, an alluvial sample may not average the entire upstream area, even when using the sand-size fraction, making the resulting erosion rates more pertinent to that sample point than to the entire catchment.

  13. Laser Diffraction Techniques Replace Sieving for Lunar Soil Particle Size Distribution Data

    NASA Technical Reports Server (NTRS)

    Cooper, Bonnie L.; Gonzalez, C. P.; McKay, D. S.; Fruland, R. L.

    2012-01-01

    Sieving was used extensively until 1999 to determine the particle size distribution of lunar samples. This method is time-consuming and requires more than a gram of material in order to obtain a result in which one may have confidence. This is demonstrated by the difference in geometric mean and median for samples measured by [1], in which a 14-gram sample produced a geometric mean of approximately 52 micrometers, whereas two other samples of 1.5 grams gave means of approximately 63 and 69 micrometers. Sample allocations for sieving are typically much smaller than a gram, and many of the sample allocations received by our lab are 0.5 to 0.25 grams in mass. Basu [2] has described how the finest fraction of the soil is easily lost in the sieving process, and this effect is compounded when sample sizes are small.

  14. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
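
    The basic estimator N = M / P and a simple interval can be sketched as follows; the design effect and survey numbers are placeholders, and the paper's variance treatment is more careful than this Wald-style interval.

    ```python
    import math

    def multiplier_estimate(M, p_hat, n, deff=2.0, z=1.96):
        """Population size N = M / P, with a CI obtained by transforming a
        design-effect-adjusted Wald CI for P. Simplified relative to the
        paper's approach; deff=2 is a placeholder assumption."""
        se = math.sqrt(deff * p_hat * (1 - p_hat) / n)
        p_lo, p_hi = max(p_hat - z * se, 1e-9), min(p_hat + z * se, 1.0)
        return M / p_hat, M / p_hi, M / p_lo   # estimate, lower, upper

    est, lo, hi = multiplier_estimate(M=500, p_hat=0.20, n=250)
    print(f"N = {est:.0f} (95% CI {lo:.0f}-{hi:.0f})")
    ```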

  15. Meta-analysis of genome-wide association from genomic prediction models

    USDA-ARS?s Scientific Manuscript database

    A limitation of many genome-wide association studies (GWA) in animal breeding is that there are many loci with small effect sizes; thus, larger sample sizes (N) are required to guarantee suitable power of detection. To increase sample size, results from different GWA can be combined in a meta-analys...

  16. Rock sampling. [apparatus for controlling particle size

    NASA Technical Reports Server (NTRS)

    Blum, P. (Inventor)

    1971-01-01

    An apparatus for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The device includes grinding means for cutting grooves in the rock surface and to provide a grouping of thin, shallow, parallel ridges and cutter means to reduce these ridges to a powder specimen. Collection means is provided for the powder. The invention relates to rock grinding and particularly to the sampling of rock specimens with good size control.

  17. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    PubMed

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  18. Reduced Sampling Size with Nanopipette for Tapping-Mode Scanning Probe Electrospray Ionization Mass Spectrometry Imaging

    PubMed Central

    Kohigashi, Tsuyoshi; Otsuka, Yoichi; Shimazu, Ryo; Matsumoto, Takuya; Iwata, Futoshi; Kawasaki, Hideya; Arakawa, Ryuichi

    2016-01-01

    Mass spectrometry imaging (MSI) with ambient sampling and ionization can rapidly and easily capture the distribution of chemical components in a solid sample. Because the spatial resolution of MSI is limited by the size of the sampling area, reducing the sampling size is an important goal for high-resolution MSI. Here, we report the first use of a nanopipette for sampling and ionization by tapping-mode scanning probe electrospray ionization (t-SPESI). The spot size of the sampling area of a dye molecular film on a glass substrate was decreased to 6 μm on average by using a nanopipette. At the same time, ionization efficiency increased with decreasing solvent flow rate. Our results indicate that a reduced sampling area is compatible with high ionization efficiency when using a nanopipette. MSI of micropatterns of ink on glass and polymer substrates was also demonstrated. PMID:28101441

  19. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample size affects the confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 m subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS based on the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for the AS_stand and JS estimates, respectively, in the 20 × 20 m plot; sample sizes greater than these did not further decrease potential errors. The optimal sample size for JS changed with plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample size for AS_stand did not. Likewise, the optimal sample size for JS did not change under different vapor pressure deficit conditions. In terms of E estimates, these results suggest that tree-to-tree variations in Fd differ among plots, and that the plot size used to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
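
    The Monte Carlo logic used to relate sample size to estimation error can be mimicked with a short resampling loop; the sap flux values below are simulated stand-ins for the 58 measured trees.

    ```python
    import numpy as np

    def subsample_error(values, n, reps=2000, rng=np.random.default_rng(0)):
        """Monte Carlo estimate of the mean relative error of a mean
        computed from n trees, relative to the full-plot ('true') mean;
        mirrors the resampling idea in the study, with made-up Fd values."""
        true = values.mean()
        errs = [abs(values[rng.choice(len(values), n, replace=False)].mean() - true)
                for _ in range(reps)]
        return np.mean(errs) / true

    fd = np.random.default_rng(1).lognormal(mean=3.0, sigma=0.4, size=58)  # 58 trees
    for n in (5, 10, 15, 20, 30):
        print(n, round(subsample_error(fd, n), 3))
    ```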

  20. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size

    PubMed Central

    Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong

    2017-01-01

    Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcode of globally distributed species should be increased to 11–15. PMID:28934362

  2. Sample size in psychological research over the past 30 years.

    PubMed

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  3. Blinded and unblinded internal pilot study designs for clinical trials with count data.

    PubMed

    Schneider, Simon; Schmidli, Heinz; Friede, Tim

    2013-07-01

    Internal pilot studies are a popular design feature to address uncertainties in the sample size calculation caused by vague information on nuisance parameters. Despite their popularity, blinded sample size reestimation procedures for trials with count data were proposed, and their properties systematically investigated, only very recently. Although blinded procedures are favored by regulatory authorities, practical application is somewhat limited by fears that blinded procedures are prone to bias if the treatment effect was misspecified in the planning. Here, we compare unblinded and blinded procedures with respect to bias, error rates, and sample size distribution. We find that both procedures maintain the desired power and that the unblinded procedure is slightly liberal, whereas the actual significance level of the blinded procedure is close to the nominal level. Furthermore, we show that in situations where uncertainty about the assumed treatment effect exists, the blinded estimator of the control event rate is biased, in contrast to the unblinded estimator, which results in differences in mean sample sizes in favor of the unblinded procedure. However, these differences are rather small compared to the deviations of the mean sample sizes from the sample size required to detect the true, but unknown, effect. We demonstrate that the variation of the sample size resulting from the blinded procedure is, in many practically relevant situations, considerably smaller than that of the unblinded procedures. The methods are extended to overdispersed counts using a quasi-likelihood approach and are illustrated by trials in relapsing multiple sclerosis. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Heavy metals relationship with water and size-fractionated sediments in rivers using canonical correlation analysis (CCA) case study, rivers of south western Caspian Sea.

    PubMed

    Vosoogh, Ali; Saeedi, Mohsen; Lak, Raziyeh

    2016-11-01

    Some pollutants can degrade the quality of aquatic freshwater bodies such as rivers, and heavy metals are among the most important pollutants in fresh waters. Heavy metals can be found dissolved in these waters or bound to suspended particles and surface sediments, and they can be regarded as being in equilibrium between water and sediment. In this study, the concentrations of heavy metals were determined in water and in different sediment size fractions. To obtain the relationship between heavy metals in water and in size-fractionated sediments, a canonical correlation analysis (CCA) was applied to rivers of the southwestern Caspian Sea. The case study covered 18 sampling stations in nine rivers. In the first step, the concentrations of heavy metals (Cu, Zn, Cr, Fe, Mn, Pb, Ni, and Cd) were determined in water and size-fractionated sediment samples. Water sampling sites were classified by hierarchical cluster analysis (HCA) using squared Euclidean distance with Ward's method. In addition, CCA was used to interpret the relationships between the concentrations of heavy metals in the river water and in the sediment samples. Based on the HCA results for the river water samples, the rivers were grouped into two classes: those having no pollution and those having low pollution. The CCA results revealed numerous relationships between the rivers of Iran's Guilan province and their size-fractionated sediment samples. The heavy metals in sediments of 0.038 to 0.125 mm diameter were slightly correlated with those in the water samples.
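
    As a sketch of the CCA step, scikit-learn's CCA can relate a water-phase concentration matrix to a sediment-phase one; the data below are synthetic stand-ins for the 18 stations and 8 metals, not the study's measurements.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    # Hypothetical data: 18 stations x 8 metals in water, and the same
    # metals measured in one sediment size fraction (0.038-0.125 mm).
    water    = rng.normal(size=(18, 8))
    sediment = 0.6 * water + rng.normal(scale=0.8, size=(18, 8))

    cca = CCA(n_components=2)
    U, V = cca.fit_transform(water, sediment)
    for k in range(2):
        r = np.corrcoef(U[:, k], V[:, k])[0, 1]
        print(f"canonical correlation {k + 1}: {r:.2f}")
    ```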

  5. Technology Tips: Sample Too Small? Probably Not!

    ERIC Educational Resources Information Center

    Strayer, Jeremy F.

    2013-01-01

    Statistical studies are referenced in the news every day, so frequently that people are sometimes skeptical of reported results. Often, no matter how large a sample size researchers use in their studies, people believe that the sample size is too small to make broad generalizations. The tasks presented in this article use simulations of repeated…

  6. Size Effect on the Mechanical Properties of CF Winding Composite

    NASA Astrophysics Data System (ADS)

    Cui, Yuqing; Yin, Zhongwei

    2017-12-01

    The mechanical properties of filament winding composites are usually tested with NOL ring samples, but few studies have examined how sample size affects the measured mechanical properties. In this research, NOL ring samples of varying winding composite thickness, diameter, and geometry were prepared to investigate the size effect on the mechanical strength of carbon fiber (CF) winding composites. The CFs T700, T1000, M40, and M50 were adopted for the winding composite, while the matrix was epoxy resin. Test results show that the tensile strength and interlaminar shear strength (ILSS) of the composites decrease monotonically as thickness increases from 1 mm to 4 mm, whereas the mechanical strength increases monotonically as diameter increases from 100 mm to 189 mm. The mechanical strength of composite samples with two flat sides is higher than that of annular samples.

  7. Thermal conductivity measurements of particulate materials: 3. Natural samples and mixtures of particle sizes

    NASA Astrophysics Data System (ADS)

    Presley, Marsha A.; Craddock, Robert A.

    2006-09-01

    A line-heat source apparatus was used to measure thermal conductivities of natural fluvial and eolian particulate sediments under low pressures of a carbon dioxide atmosphere. These measurements were compared to a previous compilation of the dependence of thermal conductivity on particle size to determine a thermal conductivity-derived particle size for each sample. Actual particle-size distributions were determined via physical separation through brass sieves. Comparison of the two analyses indicates that the thermal conductivity reflects the larger particles within the samples. In each sample at least 85-95% of the particles by weight are smaller than or equal to the thermal conductivity-derived particle size. At atmospheric pressures less than about 2-3 torr, samples that contain a large amount of small particles (<=125 μm or 4 Φ) exhibit lower thermal conductivities relative to those for the larger particles within the sample. Nonetheless, 90% of the sample by weight still consists of particles that are smaller than or equal to this lower thermal conductivity-derived particle size. These results allow further refinement in the interpretation of geomorphologic processes acting on the Martian surface. High-energy fluvial environments should produce poorer-sorted and coarser-grained deposits than lower energy eolian environments. Hence these results will provide additional information that may help identify coarser-grained fluvial deposits and may help differentiate whether channel dunes are original fluvial sediments that are at most reworked by wind or whether they represent a later overprint of sediment with a separate origin.

  8. The albatross plot: A novel graphical tool for presenting results of diversely reported studies in a systematic review

    PubMed Central

    Jones, Hayley E.; Martin, Richard M.; Lewis, Sarah J.; Higgins, Julian P.T.

    2017-01-01

    Abstract Meta‐analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies were done or limitations in how their results were reported. Other methods, such as vote counting, are then used to summarize the results of these studies, but most of these methods are limited in that they do not provide any indication of the magnitude of effect. We propose a novel plot, the albatross plot, which requires only a 1‐sided P value and a total sample size from each study (or equivalently a 2‐sided P value, direction of effect and total sample size). The plot allows an approximate examination of underlying effect sizes and the potential to identify sources of heterogeneity across studies. This is achieved by drawing contours showing the range of effect sizes that might lead to each P value for given sample sizes, under simple study designs. We provide examples of albatross plots using data from previous meta‐analyses, allowing for comparison of results, and an example from when a meta‐analysis was not possible. PMID:28453179
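
    The contours rest on a simple inversion: for a two-arm comparison of means with equal arms, the standardized difference consistent with a one-sided P value and a total sample size is roughly d ≈ 2·Φ⁻¹(1−p)/√n. A sketch of that approximation (a simplification of the paper's design-specific contours):

    ```python
    from scipy.stats import norm

    def implied_effect_size(p_one_sided, n_total):
        """Approximate standardized mean difference consistent with a given
        one-sided P value and total sample size, for a two-arm comparison
        with equal arms, i.e. the kind of contour an albatross plot draws."""
        z = norm.ppf(1.0 - p_one_sided)
        return 2.0 * z / n_total ** 0.5

    for n in (40, 100, 400):
        print(n, round(implied_effect_size(0.025, n), 2))
    # the same P value maps to a much smaller effect in a larger study
    ```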

  9. Influence of multidroplet size distribution on icing collection efficiency

    NASA Technical Reports Server (NTRS)

    Chang, H.-P.; Kimble, K. R.; Frost, W.; Shaw, R. J.

    1983-01-01

    Collection efficiencies of two-dimensional airfoils are calculated for a monodispersed-droplet icing cloud and a multidispersed-droplet cloud, and comparison is made with the experimental results reported in the NACA Technical Note series. The results of the study show considerably improved agreement with experiment when multidroplet size distributions are employed. The study then investigates the effect of collection efficiency on airborne droplet-size sampling instruments; the bias introduced by sampling from different collection volumes is predicted.

  10. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
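
    For orientation, a first-order approximation in the spirit of Hsieh et al. is sketched below; the paper's proposed method modifies this using Schouten's unequal-variance t-test formula, which is not reproduced here, and the inputs are illustrative.

    ```python
    from scipy.stats import norm

    def n_simple_logistic(p1, beta_star, alpha=0.05, power=0.8):
        """First-order sample size approximation for simple logistic
        regression with one standardized normal covariate; p1 is the event
        probability at the covariate mean, beta_star the log odds ratio per
        SD. A sketch, not the paper's modified method."""
        za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
        return (za + zb) ** 2 / (p1 * (1 - p1) * beta_star ** 2)

    print(round(n_simple_logistic(p1=0.5, beta_star=0.405)))  # OR ~1.5 per SD -> ~191
    ```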

  11. Color-size Relations of Disc Galaxies with Similar Stellar Masses

    NASA Astrophysics Data System (ADS)

    Fu, W.; Chang, R. X.; Shen, S. Y.; Zhang, B.

    2011-01-01

    To investigate the correlations between colors and sizes of disc galaxies with similar stellar masses, a sample of 7959 local face-on disc galaxies is collected from the main galaxy sample of the Seventh Data Release of Sloan Digital Sky Survey (SDSS DR7). Our results show that, under the condition that the stellar masses of disc galaxies are similar, the relation between u-r and size is weak, while g-r, r-i and r-z colors decrease with disk size. This means that the color-size relations of disc galaxies with similar stellar masses do exist, i.e., the more extended disc galaxies with similar stellar masses tend to have bluer colors. An artificial sample is constructed to confirm that this correlation is not driven by the color-stellar mass relations and size-stellar mass relation of disc galaxies. Our results suggest that the mass distribution of disk galaxies may have an important influence on their stellar formation history, i.e., the galaxies with more extended mass distribution evolve more slowly.

  12. Confidence intervals for the population mean tailored to small sample sizes, with applications to survey sampling.

    PubMed

    Rosenblum, Michael A; Laan, Mark J van der

    2009-01-07

    The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
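
    A sketch of a tail-bound interval in the spirit of the paper: Bernstein's inequality is solved for the CI half-width, with the sample standard deviation plugged in as a heuristic (the authors' construction is more careful). Note how much wider it is than the normal-theory half-width 1.96·sd/√n, matching the drawback the abstract describes.

    ```python
    import math

    def bernstein_halfwidth(sd, lo, hi, n, alpha=0.05):
        """CI half-width from Bernstein's inequality for the mean of i.i.d.
        variables bounded in [lo, hi]; plugging in the sample SD is a
        heuristic simplification of the authors' construction."""
        c = math.log(2.0 / alpha)
        b = hi - lo
        # solve n*t^2 = c*(2*sd^2 + (2/3)*b*t) for t (positive root)
        a2, a1, a0 = n, -(2.0 * c * b / 3.0), -(2.0 * c * sd ** 2)
        return (-a1 + math.sqrt(a1 ** 2 - 4 * a2 * a0)) / (2 * a2)

    # e.g., proportions (bounded in [0, 1]) from a small survey sample:
    print(round(bernstein_halfwidth(sd=0.4, lo=0.0, hi=1.0, n=50), 3))  # ~0.18
    # versus the normal-theory half-width 1.96 * 0.4 / 50**0.5 ~ 0.11
    ```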

  13. Influence of sampling window size and orientation on parafoveal cone packing density

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Ducoli, Pietro; Lombardo, Giuseppe

    2013-01-01

    We assessed the agreement between sampling windows of different size and orientation on packing density estimates in images of the parafoveal cone mosaic acquired using a flood-illumination adaptive optics retinal camera. Horizontal and vertical oriented sampling windows of different size (320x160 µm, 160x80 µm and 80x40 µm) were selected in two retinal locations along the horizontal meridian in one eye of ten subjects. At each location, cone density tended to decline with decreasing sampling area. Although the differences in cone density estimates were not statistically significant, Bland-Altman plots showed that the agreement between cone density estimated within the different sampling window conditions was moderate. The percentage of the preferred packing arrangements of cones by Voronoi tiles was slightly affected by window size and orientation. The results illustrated the high importance of specifying the size and orientation of the sampling window used to derive cone metric estimates to facilitate comparison of different studies. PMID:24009995

  14. Simulation analyses of space use: Home range estimates, variability, and sample size

    USGS Publications Warehouse

    Bekoff, Marc; Mech, L. David

    1984-01-01

    Simulations of space use by animals were run to determine the relationships among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should obtain between 100 and 200 locations to estimate home range area reliably. In some cases, this suggested guideline is higher than values found in the few published studies that address the relationship between home range area and number of locations. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also has to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data are still needed.
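
    The kind of simulation described can be sketched with a minimum convex polygon (MCP) estimator. The sketch below is hypothetical (locations drawn uniformly from a unit-square home range, perfect detection), not the authors' model, but it reproduces the qualitative pattern: the area estimate rises asymptotically toward the true value while its variability shrinks with sample size.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def mcp_area(n_locations):
    """Minimum-convex-polygon area around n simulated locations drawn
    uniformly from a unit-square home range (true area = 1)."""
    pts = rng.uniform(size=(n_locations, 2))
    return ConvexHull(pts).volume    # in 2-D, .volume is the enclosed area

for n in (10, 25, 50, 100, 200, 400):
    areas = [mcp_area(n) for _ in range(500)]
    print(f"n={n:4d}  mean={np.mean(areas):.3f}  SD={np.std(areas):.3f}")
```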

  15. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, bootstrap sample size estimation for comparing two parallel-design arms on continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculation by mathematical formulas (under the normal distribution assumption) is also carried out for the same data. The power difference between the two calculation methods is acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation from the outset, and that the same statistical method as planned for the subsequent analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are representative of the population to which the proposed trial is intended to extrapolate.
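
    The bootstrap power-estimation loop described here is easy to sketch. A minimal version, assuming pilot data for the two arms and using the Wilcoxon rank-sum (Mann-Whitney) test as the analysis method, might look like this (function names are ours, not the paper's):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=2000, alpha=0.05, seed=0):
    """Estimate power at a candidate per-arm sample size by resampling the
    pilot arms with replacement and applying the planned analysis test."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_boot):
        a = rng.choice(pilot_a, size=n_per_arm, replace=True)
        b = rng.choice(pilot_b, size=n_per_arm, replace=True)
        if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
            hits += 1
    return hits / n_boot
```

    Scanning n_per_arm upward until the estimated power reaches the target (e.g. 80%) gives the bootstrap sample size.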

  16. Combining gas-phase electrophoretic mobility molecular analysis (GEMMA), light scattering, field flow fractionation and cryo electron microscopy in a multidimensional approach to characterize liposomal carrier vesicles

    PubMed Central

    Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland

    2017-01-01

    For drug delivery, characterization of liposomes regarding size, particle number concentration, occurrence of low-sized liposome artefacts and drug encapsulation is important for understanding their pharmacodynamic properties. In our study, we aimed to demonstrate the suitability of the nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) for analyzing these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material which yielded bimodal particle size distributions. Subsequently, we compared these findings to dynamic light scattering (DLS) data and results from light scattering experiments coupled to Asymmetric Flow Field-Flow Fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples due to a size separation step prior to detection. However, the bimodal size distribution could not be detected due to method-inherent limitations. In contrast, cryo transmission electron microscopy corroborated the nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization as it could analyze both vesicle size and size distribution. Finally, nES GEMMA results were correlated with cell viability experiments to demonstrate the importance of liposome batch-to-batch control, as low-sized sample components possibly impact cell viability. PMID:27639623

  17. Investigation of the Specht density estimator

    NASA Technical Reports Server (NTRS)

    Speed, F. M.; Rydl, L. M.

    1971-01-01

    The feasibility of using the Specht density estimator function on the IBM 360/44 computer is investigated. Factors such as storage, speed, amount of calculations, size of the smoothing parameter and sample size have an effect on the results. The reliability of the Specht estimator for normal and uniform distributions and the effects of the smoothing parameter and sample size are investigated.
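
    Specht's estimator is a polynomial approximation to the Parzen-window (Gaussian-kernel) density estimate, so its behavior can be illustrated with the kernel form. A minimal one-dimensional sketch, with sigma playing the role of the smoothing parameter studied in the report:

```python
import numpy as np

def kernel_density(x_eval, sample, sigma):
    """Parzen/Specht-style Gaussian-kernel estimate of a 1-D density;
    sigma is the smoothing parameter whose choice, together with sample
    size, drives the estimator's reliability."""
    x_eval = np.atleast_1d(x_eval)[:, None]       # shape (m, 1)
    diffs = x_eval - np.asarray(sample)[None, :]  # shape (m, n)
    kernels = np.exp(-0.5 * (diffs / sigma) ** 2)
    norm = sigma * np.sqrt(2.0 * np.pi) * len(sample)
    return kernels.sum(axis=1) / norm
```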

  18. Conditional Optimal Design in Three- and Four-Level Experiments

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Borenstein, Michael

    2014-01-01

    The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
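
    For intuition, the classic two-level version of this optimization has a closed form: the within-cluster sample size that minimizes variance for a fixed budget depends only on the cost ratio and the intraclass correlation. The sketch below shows that standard two-level rule, not the three- and four-level extensions this article derives:

```python
import math

def optimal_cluster_size(cost_cluster, cost_subject, icc):
    """Two-level optimal allocation: subjects per cluster that minimizes
    the variance of the treatment effect estimate for a fixed total cost."""
    return math.sqrt((cost_cluster / cost_subject) * (1.0 - icc) / icc)

# e.g. recruiting a cluster costs 40x a subject, ICC = 0.05 -> about 28 per cluster
print(round(optimal_cluster_size(400.0, 10.0, 0.05)))
```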

  19. Investigating the effect of sputtering conditions on the physical properties of aluminum thin film and the resulting alumina template

    NASA Astrophysics Data System (ADS)

    Taheriniya, Shabnam; Parhizgar, Sara Sadat; Sari, Amir Hossein

    2018-06-01

    To study the alumina template pore size distribution as a function of the Al thin film grain size distribution, porous alumina templates were prepared by anodizing sputtered aluminum thin films. To control the grain size, the aluminum samples were sputtered at rates of 0.5, 1 and 2 Å/s with the substrate held at 25, 75 or 125 °C. All samples were anodized for 120 s in a 1 M sulfuric acid solution kept at 1 °C while a 15 V potential was applied. The standard deviation of the size distribution for samples deposited at room temperature at different rates is roughly 2 nm in both the thin-film and porous-template forms, but rises to approximately 4 nm with increasing substrate temperature. Samples with average grain sizes of 13, 14, 18.5 and 21 nm produce alumina templates with average pore sizes of 8.5, 10, 15 and 16 nm, respectively, which shows that the average grain size limits the average pore diameter in the resulting template. Lateral correlation length and grain boundary effects are other factors that affect the pore formation process and pore size distribution by limiting the initial current density.

  20. Influence of pore size distributions on decomposition of maize leaf residue: evidence from X-ray computed micro-tomography

    NASA Astrophysics Data System (ADS)

    Negassa, Wakene; Guber, Andrey; Kravchenko, Alexandra; Rivers, Mark

    2014-05-01

    Soil's potential to sequester carbon (C) depends not only on the quality and quantity of organic inputs to soil but also on the residence time of the applied organic inputs within the soil. Soil pore structure is one of the main factors that influence the residence time of soil organic matter by controlling gas exchange, soil moisture and microbial activities, and thereby soil C sequestration capacity. Previous attempts to investigate the fate of organic inputs added to soil did not allow examining their decomposition in situ, a drawback that can now be remedied by application of X-ray computed micro-tomography (µ-CT). The non-destructive and non-invasive nature of µ-CT gives an opportunity to investigate the effect of soil pore size distributions on decomposition of plant residues at a new quantitative level. The objective of this study is to examine the influence of pore size distributions on the decomposition of plant residue added to soil. Samples with contrasting pore size distributions were created using aggregate fractions of five different sizes (<0.05, 0.05-0.1, 0.1-0.5, 0.5-1.0 and 1.0-2.0 mm). Weighted average pore diameters ranged from 10 µm (<0.05 mm fraction) to 104 µm (1-2 mm fraction), while maximum pore diameters ranged from 29 µm (<0.05 mm fraction) to 568 µm (1-2 mm fraction) in the created soil samples. Dried pieces of maize leaves, 2.5 mg in mass (equivalent to 1.71 mg C g-1 soil), were added to half of the studied samples. Samples with and without maize leaves were incubated for 120 days. CO2 emission from the samples was measured at regular time intervals. To ensure that the observed differences were due to differences in pore structure and not to differences in inherent properties of the studied aggregate fractions, we repeated the whole experiment using soil from the same aggregate size fractions but ground to <0.05 mm size. Five to six replicated samples were used for intact and ground samples of all sizes with and without leaves. Two replications of the intact aggregate fractions of all sizes with leaves were subjected to µ-CT scanning before and after incubation, whereas all remaining replications of both intact and ground aggregate fractions of the <0.05, 0.05-0.1, and 1.0-2.0 mm sizes with leaves were scanned with µ-CT after the incubation. The µ-CT images showed that approximately 80% of the leaves in the intact samples of the large aggregate fractions (0.5-1.0 and 1.0-2.0 mm) decomposed during the incubation, while only 50-60% of the leaves decomposed in the intact samples of the smaller fractions. An even lower percentage of the leaves (40-50%) decomposed in the ground samples, with very similar leaf decomposition observed in all ground samples regardless of aggregate fraction size. Consistent with the µ-CT results, the proportion of decomposed leaf estimated with the conventional mass loss method was 48% and 60% for the <0.05 mm and 1.0-2.0 mm fractions of intact aggregates, respectively, and 40-50% in the ground samples. The results of the incubation experiment demonstrated that, while greater C mineralization was observed in samples of all size fractions amended with leaf, the effect of leaf presence was most pronounced in the smaller aggregate fractions (0.05-0.1 mm and <0.05 mm) of intact aggregates. The results of the present study unequivocally demonstrate that differences in pore size distributions have a major effect on the decomposition of plant residues added to soil. Moreover, in the presence of plant residues, differences in pore size distributions appear to also influence the rates of decomposition of the intrinsic soil organic matter.

  1. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
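
    The "inflate by a design effect" calculation summarized in the Background can be written directly. A minimal sketch for the two-arm parallel design with equal cluster sizes, the simplest case the review starts from:

```python
import math

def crt_clusters_per_arm(n_per_arm, cluster_size, icc):
    """Inflate an individually randomized per-arm sample size by the design
    effect 1 + (m - 1) * ICC, then convert to whole clusters per arm."""
    deff = 1.0 + (cluster_size - 1.0) * icc
    return math.ceil(n_per_arm * deff / cluster_size)

# e.g. 131 patients per arm, clusters of 20, ICC 0.05 -> design effect 1.95
print(crt_clusters_per_arm(131, 20, 0.05))   # 13 clusters per arm
```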

  2. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    PubMed

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M=3.62years) examined the conditions that enable young children to use the sample size principle (SSP) of induction-the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Adequacy of laser diffraction for soil particle size analysis

    PubMed Central

    Fisher, Peter; Aumann, Colin; Chia, Kohleth; O'Halloran, Nick; Chandra, Subhash

    2017-01-01

    Sedimentation has been a standard methodology for particle size analysis since the early 1900s. In recent years laser diffraction has begun to replace sedimentation as the preferred technique in some industries, such as marine sediment analysis. However, for the particle size analysis of soils, which have a diverse range of both particle size and shape, laser diffraction still requires evaluation of its reliability. In this study, the sedimentation-based sieve plummet balance method and the laser diffraction method were used to measure the particle size distribution of 22 soil samples representing four contrasting Australian Soil Orders. Initially, a precise wet riffling methodology was developed that is capable of obtaining representative samples within the recommended obscuration range for laser diffraction. Repeatable results were obtained even when measurements were made at the extreme ends of the manufacturer's recommended obscuration range. Statistical analysis suggested that using sample pretreatment to remove soil organic carbon (and possible traces of calcium carbonate) made minor differences to the laser diffraction particle size distributions compared with no pretreatment. These differences were marginally statistically significant in the Podosol topsoil and Vertosol subsoil. There are well-known reasons why sedimentation methods may be considered to 'overestimate' plate-like clay particles, while laser diffraction will 'underestimate' the proportion of clay particles. In this study we used Lin's concordance correlation coefficient to determine the equivalence of the laser diffraction and sieve plummet balance results. The results suggested that the laser diffraction thresholds equivalent to the sieve plummet balance cumulative particle sizes of < 2 μm, < 20 μm, and < 200 μm were < 9 μm, < 26 μm, and < 275 μm, respectively. The many advantages of laser diffraction for soil particle size analysis, and the empirical results of this study, suggest that deployment of laser diffraction as a standard test procedure can provide reliable results, provided consistent sample preparation is used. PMID:28472043

  4. Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.

    PubMed

    Jung, Sin-Ho

    2017-07-01

    In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.

  5. Neuromuscular dose-response studies: determining sample size.

    PubMed

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly greater sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
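
    A normal-approximation version of this calculation is sketched below. It illustrates the general approach rather than the authors' exact t-based computation, which iterates on the t quantile and therefore returns slightly larger n (24 rather than about 22 at ±15% with a COV of 25%).

```python
from scipy.stats import norm

def n_for_relative_error(cov, rel_error, alpha=0.05, power=0.80):
    """Approximate sample size so a two-tailed one-sample test detects a
    shift of rel_error (as a fraction of the mean) when the coefficient
    of variation is cov; normal approximation to the t-test."""
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(power)
    return int(round(((z_a + z_b) * cov / rel_error) ** 2))

print(n_for_relative_error(0.25, 0.15))   # ~22; the t-based version gives 24
print(n_for_relative_error(0.25, 0.12))   # ~34; the t-based version gives 37
```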

  6. Degradation resistance of 3Y-TZP ceramics sintered using spark plasma sintering

    NASA Astrophysics Data System (ADS)

    Chintapalli, R.; Marro, F. G.; Valle, J. A.; Yan, H.; Reece, M. J.; Anglada, M.

    2009-09-01

    Commercially available tetragonal zirconia powder doped with 3 mol% yttria was sintered using spark plasma sintering (SPS) and investigated for its resistance to hydrothermal degradation. Samples were sintered at 1100, 1150, 1175 and 1600 °C at a constant pressure of 100 MPa with a soaking time of 5 minutes, and the grain sizes obtained were 65, 90, 120 and 800 nm, respectively. Conventionally sintered samples with a grain size of 300 nm were also compared with the SPS samples. Finely polished samples were subjected to artificial degradation at 131 °C for 60 hours in water vapour in an autoclave at a pressure of 2 bar. The XRD studies show no phase transformation in samples with low density and small grain size (<200 nm), but significant phase transformation in dense samples with larger grain size (>300 nm). The results are discussed in terms of present theories of hydrothermal degradation.

  7. Suitability of river delta sediment as proppant, Missouri and Niobrara Rivers, Nebraska and South Dakota, 2015

    USGS Publications Warehouse

    Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine

    2017-11-16

    Sediment management is a challenge faced by reservoir managers, who have several potential options, including dredging, for mitigating storage capacity lost to sedimentation. As sediment is removed from reservoir storage, use of the sediment for socioeconomic or ecological benefit could defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach, where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Surface-geophysical surveys of electrical resistivity guided the boring program; in April 2015, duplicate 3.8-centimeter-diameter, 3.7-meter-long cores were collected at 25 sites on delta sandbars using the direct-push method. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley. At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials. Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to measure particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance. For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm); 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, the geometric mean size averaged 0.26 mm; an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class.
Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted. Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch). To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentage of at most 4.1 percent. For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa.
Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class. Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.

  8. Relationships fade with time: a meta-analysis of temporal trends in publication in ecology and evolution.

    PubMed Central

    Jennions, Michael D; Møller, Anders P

    2002-01-01

    Both significant positive and negative relationships between the magnitude of research findings (their 'effect size') and their year of publication have been reported in a few areas of biology. These trends have been attributed to Kuhnian paradigm shifts, scientific fads and bias in the choice of study systems. Here we test whether or not these isolated cases reflect a more general trend. We examined the relationship using effect sizes extracted from 44 peer-reviewed meta-analyses covering a wide range of topics in ecological and evolutionary biology. On average, there was a small but significant decline in effect size with year of publication. For the original empirical studies there was also a significant decrease in effect size as sample size increased. However, the effect of year of publication remained even after we controlled for sampling effort. Although these results have several possible explanations, it is suggested that a publication bias against non-significant or weaker findings offers the most parsimonious explanation. As in the medical sciences, non-significant results may take longer to publish and studies with both small sample sizes and non-significant results may be less likely to be published. PMID:11788035

  9. Rasch fit statistics and sample size considerations for polytomous data.

    PubMed

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-05-29

    Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research, the purpose of this study was to explore the relationship between fit statistics and sample size for polytomous data. Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire-9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating Scale and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. It was concluded that mean square statistics are relatively independent of sample size for polytomous data and that misfit to the model can be identified using published recommended ranges.

  10. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review

    PubMed Central

    Morris, Tom; Gray, Laura

    2017-01-01

    Objectives: To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting: Any, not limited to healthcare settings. Participants: Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures: The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results: Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions: Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
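
    The primary outcome measure is simple to compute from a list of cluster sizes; for reference:

```python
import numpy as np

def cluster_size_cv(sizes):
    """Coefficient of variation of cluster sizes: SD divided by mean."""
    sizes = np.asarray(sizes, dtype=float)
    return sizes.std(ddof=1) / sizes.mean()

print(round(cluster_size_cv([120, 85, 310, 95, 150]), 2))  # ~0.6 for these illustrative sizes
```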

  11. A post hoc evaluation of a sample size re-estimation in the Secondary Prevention of Small Subcortical Strokes study.

    PubMed

    McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S

    2016-10-01

    The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size is re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation performed during the study increased the planned sample size from 2500 to 3020, and we sought to determine the impact of this re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of monotherapy versus dual therapy change: there was no difference in the risk of recurrent stroke with dual versus monotherapy (n = 3020: HR (95% confidence interval) 0.92 (0.72, 1.2), p = 0.48; n = 2500: HR (95% confidence interval) 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020: HR (95% confidence interval) 0.81 (0.63, 1.0), p = 0.089; n = 2500: HR (95% confidence interval) 0.89 (0.68, 1.17), p = 0.40). The results of the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were affected as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs. © The Author(s) 2016.

  12. The albatross plot: A novel graphical tool for presenting results of diversely reported studies in a systematic review.

    PubMed

    Harrison, Sean; Jones, Hayley E; Martin, Richard M; Lewis, Sarah J; Higgins, Julian P T

    2017-09-01

    Meta-analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies were done or limitations in how their results were reported. Other methods, such as vote counting, are then used to summarize the results of these studies, but most of these methods are limited in that they do not provide any indication of the magnitude of effect. We propose a novel plot, the albatross plot, which requires only a 1-sided P value and a total sample size from each study (or equivalently a 2-sided P value, direction of effect and total sample size). The plot allows an approximate examination of underlying effect sizes and the potential to identify sources of heterogeneity across studies. This is achieved by drawing contours showing the range of effect sizes that might lead to each P value for given sample sizes, under simple study designs. We provide examples of albatross plots using data from previous meta-analyses, allowing for comparison of results, and an example from when a meta-analysis was not possible. Copyright © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd.
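
    The contour construction can be sketched for the simplest design the authors mention: in a balanced two-group comparison of means, a one-sided p-value and total sample size imply a standardized difference through z = d * sqrt(n/4). This is an illustrative special case, not the full set of designs the plot supports:

```python
import numpy as np
from scipy.stats import norm

def implied_effect_size(p_one_sided, n_total):
    """Standardized mean difference consistent with a one-sided p-value and
    total sample size, assuming a balanced two-group design:
    z = d * sqrt(n/4)  =>  d = z / sqrt(n/4)."""
    z = norm.isf(p_one_sided)          # one-sided p -> z score
    return z / np.sqrt(n_total / 4.0)

# The same p-value implies a much smaller effect in a larger study:
print(round(implied_effect_size(0.01, 50), 2))
print(round(implied_effect_size(0.01, 5000), 2))
```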

  13. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    PubMed

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre- and multiple post-randomization assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming perfect correlations among the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced by at least 44%, 56%, and 61% for repeated measures analysis of covariance with 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
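
    Under compound symmetry the quoted worst-case savings can be reproduced directly. The sketch below assumes the treatment effect is estimated by ANCOVA on the mean of the k follow-up measures with the baseline as covariate; the variance multiplier relative to a single-measurement two-sample t-test is then (1 + (k-1)ρ)/k − ρ², maximized at ρ = (k−1)/(2k):

```python
def ancova_variance_multiplier(k, rho):
    """Variance of the ANCOVA treatment effect (baseline covariate, mean of
    k follow-ups, compound symmetry with correlation rho) relative to a
    single-measurement two-sample t-test."""
    return (1.0 + (k - 1.0) * rho) / k - rho ** 2

def conservative_saving(k):
    """Guaranteed sample size reduction: worst case over rho."""
    rho_worst = (k - 1.0) / (2.0 * k)
    return 1.0 - ancova_variance_multiplier(k, rho_worst)

for k in (2, 3, 4):
    print(k, round(100 * conservative_saving(k)))   # 44, 56, 61 percent
```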

  14. Influence of size-fractioning techniques on concentrations of selected trace metals in bottom materials from two streams in northeastern Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Helsel, Dennis R.

    1986-01-01

    Identical stream-bottom material samples, when fractioned to the same size by different techniques, may contain significantly different trace-metal concentrations. The precision of the techniques may also differ, which could affect the ability to discriminate between size-fractioned bottom-material samples having different metal concentrations. Bottom-material samples fractioned to less than 0.020 millimeters by three common techniques (air elutriation, sieving, and settling) were analyzed for six trace metals to determine whether the technique used to obtain the desired particle-size fraction affects the ability to discriminate between bottom materials having different trace-metal concentrations. In addition, this study attempts to assess whether median trace-metal concentrations in size-fractioned bottom materials of identical origin differ depending on the size-fractioning technique used. Finally, this study evaluates the efficiency of the three size-fractioning techniques in terms of the time, expense, and effort involved. Bottom-material samples were collected at two sites in northeastern Ohio: one located in an undeveloped forested basin, and the other in a basin having a mixture of industrial and surface-mining land uses. The sites were selected for their close physical proximity, similar contributing drainage areas, and the likelihood that trace-metal concentrations in the bottom materials would be significantly different. Statistically significant differences in trace-metal concentrations were detected between bottom-material samples collected at the two sites when the samples had been size-fractioned by air elutriation or sieving. Samples that had been size-fractioned by settling in native water did not differ measurably in any of the six trace metals analyzed. Results of multiple comparison tests suggest that differences related to size-fractioning technique were evident in median copper, lead, and iron concentrations. Technique-related differences in copper concentrations most likely resulted from contamination of air-elutriated samples by a feed tip on the elutriator apparatus. No technique-related differences were observed in chromium, manganese, or zinc concentrations. Although air elutriation was the most expensive size-fractioning technique investigated, samples fractioned by this technique appeared to provide a superior level of discrimination between the metal concentrations present in the bottom materials of the two sites. Sieving was an adequate lower-cost, but more labor-intensive, alternative.

  15. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
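
    Under normality the probability in question has a closed form: the mean of n observations lies within f standard deviations of the true mean with probability 2Φ(f√n) − 1. A sketch of that calculation (the authors' method may differ in detail):

```python
from math import sqrt
from scipy.stats import norm

def prob_within(f, n):
    """P(|sample mean - mu| <= f * sigma) for n iid normal observations:
    2 * Phi(f * sqrt(n)) - 1."""
    return 2.0 * norm.cdf(f * sqrt(n)) - 1.0

print(round(prob_within(0.5, 5), 2))   # ~0.74 even for a very small sample
```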

  16. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.

  17. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates and not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second extends the first to the most common situation, where SRR is unknown. In this situation, data simulation was used together with linear regression to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). Correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts to obtain the same precision; the opposite holds for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
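
    The first calculator rests on the reverse catalytic model, in which expected seroprevalence at age a is λ/(λ+ρ)·(1 − e^(−(λ+ρ)a)) for SCR λ and SRR ρ. Given a known ρ, a confidence limit for SP can be mapped to one for SCR by inverting this relationship numerically; a sketch with illustrative parameter values:

```python
import numpy as np
from scipy.optimize import brentq

def seroprev(lam, rho, age):
    """Reverse catalytic model: expected seroprevalence at a given age."""
    return lam / (lam + rho) * (1.0 - np.exp(-(lam + rho) * age))

def scr_from_sp(sp, rho, age):
    """Invert the model for the SCR (lambda), given SP, a known SRR, and age."""
    return brentq(lambda lam: seroprev(lam, rho, age) - sp, 1e-9, 5.0)

# Map a 95% CI for SP at a representative age into a CI for SCR:
ci_scr = [scr_from_sp(p, rho=0.01, age=20.0) for p in (0.15, 0.25)]
print([round(v, 4) for v in ci_scr])
```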

  18. Modeling the transport of engineered nanoparticles in saturated porous media - an experimental setup

    NASA Astrophysics Data System (ADS)

    Braun, A.; Neukum, C.; Azzam, R.

    2011-12-01

    The accelerating production and application of engineered nanoparticles is causing concern regarding their release and fate in the environment. For assessing the risk posed to drinking water resources, it is important to understand the transport and retention mechanisms of engineered nanoparticles in soil and groundwater. In this study, an experimental setup for analyzing the mobility of silver and titanium dioxide nanoparticles in saturated porous media is presented. Batch and column experiments with glass beads and two different soils as matrices are carried out under varied conditions to study the impact of electrolyte concentration and pore water velocities. The analysis of nanoparticles poses several challenges, such as detection and characterization, and the preparation of a well-dispersed sample with defined properties, as nanoparticles tend to form agglomerates when suspended in an aqueous medium. The analytical part of the experiments is mainly undertaken with Flow Field-Flow Fractionation (FlFFF). This chromatography-like technique separates a particulate sample according to size. It is coupled to a UV/Vis and a light scattering detector for analyzing the concentration and size distribution of the sample. The advantages of this technique are its ability to analyze complex environmental samples, such as the effluent of column experiments including soil components, and its gentle sample treatment. To optimize sample preparation and obtain a first idea of the aggregation behavior in soil solutions, sedimentation experiments were used to investigate the effect of ionic strength, sample concentration and surfactant addition on particle or aggregate size and on temporal dispersion stability. In general, the lower the particle concentration, the more stable the samples. For TiO2 nanoparticles, the addition of a surfactant yielded the most stable samples with the smallest aggregate sizes; furthermore, suspension stability increased with electrolyte concentration. Depending on the dispersing medium, the results show that TiO2 nanoparticles tend to form aggregates 100-200 nm in diameter, while the primary particle size is given as 21 nm by the manufacturer. Aggregate sizes increase with time. The particle size distribution of the silver nanoparticle samples is quite uniform in each medium. Fresh samples show aggregate sizes between 40 and 45 nm, while the primary particle size is 15 nm according to the manufacturer. Aggregate size increased only slightly with time during the sedimentation experiments. These results are used as a reference when analyzing the effluent of column experiments.

  19. Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.

    Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree-based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases, and that it follows a logistic function.

  20. The effect of sample holder material on ion mobility spectrometry reproducibility

    NASA Technical Reports Server (NTRS)

    Jadamec, J. Richard; Su, Chih-Wu; Rigdon, Stephen; Norwood, Lavan

    1995-01-01

    When a positive detection of a narcotic occurs during the search of a vessel, a decision has to be made whether a further intensive search is warranted. This decision is based in part on the results of a second sample collected from the same area. Therefore, the reproducibility of both sampling and instrumental analysis is critical in terms of justifying an in-depth search. As reported at the 2nd Annual IMS Conference in Quebec City, the U.S. Coast Guard has determined that when paper is utilized as the sample desorption medium for the Barringer IONSCAN, the analytical results using standard reference samples are reproducible. A study was conducted utilizing papers of varying pore sizes and comparing their performance as desorption materials relative to the standard Barringer 50 micron Teflon. Nominal pore sizes ranged from 30 microns down to 2 microns. Results indicate that there is some peak instability in the first two to three windows of the analysis. The severity of the instability increased as the pore size of the paper decreased. However, the observed peak instability does not create a situation that results in decreased reliability or reproducibility of the analytical result.

  1. Marine sources of ice nucleating particles: results from phytoplankton cultures and samples collected at sea

    NASA Astrophysics Data System (ADS)

    Wilbourn, E.; Thornton, D.; Brooks, S. D.; Graff, J.

    2016-12-01

    The role of marine aerosols as ice nucleating particles is currently poorly understood. Despite growing interest, there are remarkably few ice nucleation measurements on representative marine samples. Here we present results of heterogeneous ice nucleation from laboratory studies and in-situ air and sea water samples collected during NAAMES (North Atlantic Aerosol and Marine Ecosystems Study). Thalassiosira weissflogii (CCMP 1051) was grown under controlled conditions in batch cultures and the ice nucleating activity depended on the growth phase of the cultures. Immersion freezing temperatures of the lab-grown diatoms were determined daily using a custom ice nucleation apparatus cooled at a set rate. Our results show that the age of the culture had a significant impact on ice nucleation temperature, with samples in stationary phase causing nucleation at -19.9 °C, approximately nine degrees warmer than the freezing temperature during exponential growth phase. Field samples gathered during the NAAMES II cruise in May 2016 were also tested for ice nucleating ability. Two types of samples were gathered. Firstly, whole cells were fractionated by size from surface seawater using a BD Biosciences Influx Cell Sorter (BD BS ISD). Secondly, aerosols were generated using the SeaSweep and subsequently size-selected using a PIXE Cascade Impactor. Samples were tested for the presence of ice nucleating particles (INP) using the technique described above. There were significant differences in the freezing temperature of the different samples; of the three sample types the lab-grown cultures tested during stationary phase froze at the warmest temperatures, followed by the SeaSweep samples (-25.6 °C) and the size-fractionated cell samples (-31.3 °C). Differences in ice nucleation ability may be due to size differences between the INP, differences in chemical composition of the sample, or some combination of these two factors. Results will be presented and atmospheric implications discussed.

  2. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    PubMed

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
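
    The central quantity here is the conditional power at the interim. A minimal sketch for a one-sided test of a normal mean, with the drift for the remaining data defaulting to the interim trend (the paper's criterion is then a condition on the interim test statistic itself):

```python
from math import sqrt
from scipy.stats import norm

def conditional_power(z1, n1, n, alpha=0.025, delta=None):
    """Conditional power for a one-sided normal test, given interim statistic
    z1 at n1 of n planned observations; delta is the standardized effect
    assumed for the remainder (defaults to the interim trend z1/sqrt(n1))."""
    if delta is None:
        delta = z1 / sqrt(n1)            # "current trend" assumption
    t = n1 / n                           # information fraction
    z_alpha = norm.isf(alpha)
    drift = delta * sqrt(n - n1)
    return norm.sf((z_alpha - sqrt(t) * z1) / sqrt(1.0 - t) - drift)
```

    In this notation, the 50% rule of the 2004 result corresponds to raising the sample size only when conditional_power(...) exceeds 0.5.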

  3. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
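    A small simulation makes the efficiency gain of RSS concrete. This is a minimal sketch, assuming perfect ranking and ignoring the cost models that are the article's focus; the set size, cycle count and normal population below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2002)

def rss_sample(set_size, n_cycles, draw):
    """Balanced ranked set sample: each cycle draws `set_size` sets of
    `set_size` units, ranks each set (perfect ranking assumed), and
    measures the i-th order statistic of the i-th set."""
    out = []
    for _ in range(n_cycles):
        for i in range(set_size):
            out.append(np.sort(draw(set_size))[i])
    return np.array(out)

draw = lambda n: rng.normal(10.0, 2.0, n)   # hypothetical population
k, cycles, reps = 4, 5, 5000                # 20 measured units either way
rss_means = [rss_sample(k, cycles, draw).mean() for _ in range(reps)]
srs_means = [draw(k * cycles).mean() for _ in range(reps)]
print(np.var(srs_means) / np.var(rss_means))  # relative efficiency > 1
```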

  4. A general approach for sample size calculation for the three-arm 'gold standard' non-inferiority design.

    PubMed

    Stucke, Kathrin; Kieser, Meinhard

    2012-12-10

    In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
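    The abstract does not reproduce its sample size formulas, but the flavor of a normal-approximation non-inferiority calculation can be sketched for a single pairwise comparison; the three-arm method couples several such tests and optimizes the allocation jointly, so treat this only as an illustration with made-up inputs.

```python
from scipy.stats import norm

def ni_n_per_group(sigma, margin, true_diff=0.0, alpha=0.025, power=0.80):
    """Per-group n for one two-arm non-inferiority comparison of means
    (normal approximation); all inputs here are hypothetical."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return 2 * (z * sigma / (margin - true_diff)) ** 2

# e.g. SD of 1.0, non-inferiority margin of 0.4, truly equal treatments
print(round(ni_n_per_group(sigma=1.0, margin=0.4)))  # ~98 per group
```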

  5. A single test for rejecting the null hypothesis in subgroups and in the overall sample.

    PubMed

    Lin, Yunzhi; Zhou, Kefei; Ganju, Jitendra

    2017-01-01

    In clinical trials, some patient subgroups are likely to demonstrate larger effect sizes than other subgroups. For example, the effect size, or informally the benefit with treatment, is often greater in patients with a moderate condition of a disease than in those with a mild condition. A limitation of the usual method of analysis is that it does not incorporate this ordering of effect size by patient subgroup. We propose a test statistic which supplements the conventional test by including this information and simultaneously tests the null hypothesis in pre-specified subgroups and in the overall sample. It results in more power than the conventional test when the differences in effect sizes across subgroups are at least moderately large; otherwise it loses power. The method involves combining p-values from models fit to pre-specified subgroups and the overall sample in a manner that assigns greater weight to subgroups in which a larger effect size is expected. Results are presented for randomized trials with two and three subgroups.
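    The weighting idea can be illustrated with a generic weighted inverse-normal (Stouffer) combination of one-sided subgroup p-values; the authors' exact statistic is not given in the abstract, so the weights and p-values below are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def weighted_z_test(p_values, weights):
    """Weighted inverse-normal (Stouffer) combination of one-sided p-values."""
    w = np.asarray(weights, dtype=float)
    z = norm.isf(np.asarray(p_values))       # convert p-values to z-scores
    z_comb = (w * z).sum() / np.sqrt((w ** 2).sum())
    return norm.sf(z_comb)                   # combined one-sided p-value

# larger weight on the subgroup expected to show the larger effect
print(weighted_z_test([0.03, 0.20, 0.04], weights=[2.0, 1.0, 1.5]))
```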

  6. Regression modeling of particle size distributions in urban storm water: advancements through improved sample collection methods

    USGS Publications Warehouse

    Fienen, Michael N.; Selbig, William R.

    2012-01-01

    A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.

  7. An empirical analysis of the quantitative effect of data when fitting quadratic and cubic polynomials

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
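    A minimal re-creation of this kind of Monte Carlo experiment, assuming a sine function as the transcendental model and Gaussian contamination (the study's actual model functions and noise levels are not specified in the abstract):

```python
import numpy as np

rng = np.random.default_rng(1974)
true_f = np.sin            # stand-in transcendental model
sigma = 0.1                # noise ("contamination") level

for n in (8, 16, 32, 64, 128, 256):
    mse = []
    for _ in range(500):                       # Monte Carlo replicates
        x = np.linspace(0.0, np.pi, n)
        y = true_f(x) + rng.normal(0.0, sigma, n)
        coef = np.polyfit(x, y, deg=3)         # cubic polynomial fit
        resid = y - np.polyval(coef, x)
        mse.append(resid.var(ddof=4))          # residual variance
    print(n, np.mean(mse))                     # stabilizes as n grows
```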

  8. Combining gas-phase electrophoretic mobility molecular analysis (GEMMA), light scattering, field flow fractionation and cryo electron microscopy in a multidimensional approach to characterize liposomal carrier vesicles.

    PubMed

    Urey, Carlos; Weiss, Victor U; Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland

    2016-11-20

    For drug delivery, characterization of liposomes regarding size, particle number concentration, occurrence of low-sized liposome artefacts and drug encapsulation is of importance for understanding their pharmacodynamic properties. In our study, we aimed to demonstrate the applicability of the nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) as a suitable technique for analyzing these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material which yielded bimodal particle size distributions. Subsequently, we compared these findings to dynamic light scattering (DLS) data and results from light scattering experiments coupled to Asymmetric Flow Field-Flow Fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples due to a size separation step prior to detection. However, the bimodal size distribution could not be detected with these methods due to method-inherent limitations. In contrast, cryo transmission electron microscopy corroborated the nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization as it could analyze both vesicle size and size distribution. Finally, a correlation of nES GEMMA results with cell viability experiments was carried out to demonstrate the importance of liposome batch-to-batch control, as low-sized sample components possibly impact cell viability. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Internet Pornography Use and Sexual Body Image in a Dutch Sample

    PubMed Central

    Cranney, Stephen

    2016-01-01

    Objectives A commonly attributed cause of sexual body image dissatisfaction is pornography use. This relationship has received little verification. Methods The relationship between sexual body image dissatisfaction and Internet pornography use was tested using a large-N sample of Dutch respondents. Results/Conclusion Penis size dissatisfaction is associated with pornography use. The relationship between pornography use and breast size dissatisfaction is null. These results support prior speculation and self-reports about the relationship between pornography use and sexual body image among men. These results also support a prior null finding of the relationship between breast size satisfaction for women and pornography use. PMID:26918066

  10. Atomistic origin of size effects in fatigue behavior of metallic glasses

    NASA Astrophysics Data System (ADS)

    Sha, Zhendong; Wong, Wei Hin; Pei, Qingxiang; Branicio, Paulo Sergio; Liu, Zishun; Wang, Tiejun; Guo, Tianfu; Gao, Huajian

    2017-07-01

    While many experiments and simulations on metallic glasses (MGs) have focused on their tensile ductility under monotonic loading, the fatigue mechanisms of MGs under cyclic loading still remain largely elusive. Here we perform molecular dynamics (MD) and finite element simulations of tension-compression fatigue tests in MGs to elucidate their fatigue mechanisms with focus on the sample size effect. Shear band (SB) thickening is found to be the inherent fatigue mechanism for nanoscale MGs. The difference in fatigue mechanisms between macroscopic and nanoscale MGs originates from whether the SB forms partially or fully through the cross-section of the specimen. Furthermore, a qualitative investigation of the sample size effect suggests that small sample size increases the fatigue life while large sample size promotes cyclic softening and necking. Our observations on the size-dependent fatigue behavior can be rationalized by the Gurson model and the concept of surface tension of the nanovoids. The present study sheds light on the fatigue mechanisms of MGs and can be useful in interpreting previous experimental results.

  11. Affected States Soft Independent Modeling by Class Analogy from the Relation Between Independent Variables, Number of Independent Variables and Sample Size

    PubMed Central

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz

    2013-01-01

    Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to determine whether the method is affected by the number of independent variables, the relationship between variables, and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. Simulations were carried out to determine whether the method is influenced by the number of independent variables, the relationship between variables, and the sample size. Conditions were examined in which the sample sizes in both groups are equal, with 30, 100 and 1000 samples; in which the number of variables is 2, 3, 5, 10, 50 or 100; and in which the relationship between variables is quite high, medium, or quite low. Results: The average classification accuracy of the simulations, which were carried out 1000 times for each condition of the trial plan, is given as tables. Conclusion: Diagnostic accuracy increases as the number of independent variables increases. SIMCA is best suited to data in which the relationships between variables are quite high and the independent variables are many in number, and it can also be used when the data contain outlier values. PMID:25207065
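    For readers unfamiliar with SIMCA, a bare-bones sketch of the idea is shown below: one PCA model is fitted per class and a sample is assigned to the class that reconstructs it with the smallest residual. This omits the F-test-based class acceptance regions of full SIMCA and is not the simulation code used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA

class SimpleSIMCA:
    """Minimal SIMCA sketch: one PCA model per class; a sample is assigned
    to the class with the smallest reconstruction residual."""
    def __init__(self, n_components=2):
        self.n_components = n_components
        self.models = {}

    def fit(self, X, y):
        for label in np.unique(y):
            self.models[label] = PCA(self.n_components).fit(X[y == label])
        return self

    def predict(self, X):
        labels = list(self.models)
        resid = np.column_stack([
            ((X - m.inverse_transform(m.transform(X))) ** 2).sum(axis=1)
            for m in self.models.values()])
        return np.array(labels)[resid.argmin(axis=1)]

# toy demo with two shifted Gaussian classes
X = np.vstack([np.random.randn(40, 6), np.random.randn(40, 6) + 2.0])
y = np.array([0] * 40 + [1] * 40)
print((SimpleSIMCA(2).fit(X, y).predict(X) == y).mean())
```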

  12. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy of vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, a larger sampling size and a higher height threshold were required to obtain accurate corn LAI estimates when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  13. Particle-size dependence on metal(loid) distributions in mine wastes: Implications for water contamination and human exposure

    USGS Publications Warehouse

    Kim, C.S.; Wilson, K.M.; Rytuba, J.J.

    2011-01-01

    The mining and processing of metal-bearing ores has resulted in contamination issues where waste materials from abandoned mines remain in piles of untreated and unconsolidated material, posing the potential for waterborne and airborne transport of toxic elements. This study presents a systematic method of particle size separation, mass distribution, and bulk chemical analysis for mine tailings and adjacent background soil samples from the Rand historic mining district, California, in order to assess particle size distribution and related trends in metal(loid) concentration as a function of particle size. Mine tailings produced through stamp milling and leaching processes were found to have both a narrower and finer particle size distribution than background samples, with significant fractions of particles available in a size range (≤250 μm) that could be incidentally ingested. In both tailings and background samples, the majority of trace metal(loid)s display an inverse relationship between concentration and particle size, resulting in higher proportions of As, Cr, Cu, Pb and Zn in finer-sized fractions which are more susceptible to both water- and wind-borne transport as well as ingestion and/or inhalation. Established regulatory screening levels for such elements may, therefore, significantly underestimate potential exposure risk if relying solely on bulk sample concentrations to guide remediation decisions. Correlations in elemental concentration trends (such as between As and Fe) indicate relationships between elements that may be relevant to their chemical speciation. © 2011 Elsevier Ltd.

  14. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

    The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would be on the power of the trial and whether any inflation of the sample size would be required.
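    For context, the adjusted design effects proposed for ordinary CRTs with unequal cluster sizes, on which the two DEs evaluated here were based, take a form like the following; the function and inputs are illustrative and are not the SW-CRT-specific adjustment studied in the paper.

```python
def cluster_design_effect(m_mean, cv, icc):
    """Design effect for a CRT with unequal cluster sizes: the relative
    inflation of the required sample size, given the mean cluster size,
    the coefficient of variation of cluster sizes, and the ICC."""
    return 1 + ((cv ** 2 + 1) * m_mean - 1) * icc

# equal clusters vs. clusters whose sizes vary with CV = 0.6
print(cluster_design_effect(m_mean=20, cv=0.0, icc=0.05),   # 1.95
      cluster_design_effect(m_mean=20, cv=0.6, icc=0.05))   # 2.31
```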

  15. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
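    The resampling experiment is easy to reproduce in outline. The sketch below, with invented prevalence, sensitivity and specificity, shows how the sample-to-sample spread of estimated sensitivity shrinks as the contingency-table sample grows:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 8000
truth = rng.random(N) < 0.30                  # condition present
test = np.where(truth, rng.random(N) < 0.80,  # sensitivity 0.80
                       rng.random(N) < 0.15)  # 1 - specificity 0.15

for n in (100, 200, 400, 800, 1600):
    sens = []
    for _ in range(100):                      # 100 random samples per size
        idx = rng.choice(N, n, replace=False)
        t, d = truth[idx], test[idx]
        sens.append((t & d).sum() / max(t.sum(), 1))
    print(n, round(np.std(sens), 4))          # variability shrinks with n
```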

  16. Fabrication and Characterization of Surrogate Glasses Aimed to Validate Nuclear Forensic Techniques

    DTIC Science & Technology

    2017-12-01

    sample is processed while submerged and produces fine sized particles the exposure levels and risk of contamination from the samples is also greatly...induced the partial collapses of the xerogel network strengthened the network while the sample sizes were reduced [22], [26]. As a result the wt...inhomogeneous, making it difficult to clearly determine which features were present in the sample before LDHP and which were caused by it. In this study

  17. Size Matters: FTIR Spectral Analysis of Apollo Regolith Samples Exhibits Grain Size Dependence.

    NASA Astrophysics Data System (ADS)

    Martin, Dayl; Joy, Katherine; Pernet-Fisher, John; Wogelius, Roy; Morlok, Andreas; Hiesinger, Harald

    2017-04-01

    The Mercury Thermal Infrared Spectrometer (MERTIS) on the upcoming BepiColombo mission is designed to analyse the surface of Mercury in thermal infrared wavelengths (7-14 μm) to investigate the physical properties of the surface materials [1]. Laboratory analyses of analogue materials are useful for investigating how various sample properties alter the resulting infrared spectrum. Laboratory FTIR analysis of Apollo fine (<1mm) soil samples 14259,672, 15401,147, and 67481,96 have provided an insight into how grain size, composition, maturity (i.e., exposure to space weathering processes), and proportion of glassy material affect their average infrared spectra. Each of these samples was analysed as a bulk sample and five size fractions: <25, 25-63, 63-125, 125-250, and <250 μm. Sample 14259,672 is a highly mature highlands regolith with a large proportion of agglutinates [2]. The high agglutinate content (>60%) causes a 'flattening' of the spectrum, with reduced reflectance in the Reststrahlen Band region (RB) as much as 30% in comparison to samples that are dominated by a high proportion of crystalline material. Apollo 15401,147 is an immature regolith with a high proportion of volcanic glass pyroclastic beads [2]. The high mafic mineral content results in a systematic shift in the Christiansen Feature (CF - the point of lowest reflectance) to longer wavelength: 8.6 μm. The glass beads dominate the spectrum, displaying a broad peak around the main Si-O stretch band (at 10.8 μm). As such, individual mineral components of this sample cannot be resolved from the average spectrum alone. Apollo 67481,96 is a sub-mature regolith composed dominantly of anorthite plagioclase [2]. The CF position of the average spectrum is shifted to shorter wavelengths (8.2 μm) due to the higher proportion of felsic minerals. Its average spectrum is dominated by anorthite reflectance bands at 8.7, 9.1, 9.8, and 10.8 μm. The average reflectance is greater than the other samples due to a lower proportion of glassy material. In each soil, the smallest fractions (0-25 and 25-63 μm) have CF positions 0.1-0.4 μm higher than the larger grain sizes. Also, the bulk-sample spectra mostly closely resemble the 0-25 μm sieved size fraction spectrum, indicating that this size fraction of each sample dominates the bulk spectrum regardless of other physical properties. This has implications for surface analyses of other Solar System bodies where some mineral phases or components could be concentrated in a particular size fraction. For example, the anorthite grains in 67481,96 are dominantly >25 μm in size and therefore may not contribute proportionally to the bulk average spectrum (compared to the <25 μm fraction). The resulting bulk spectrum of 67481,96 has a CF position 0.2 μm higher than all size fractions >25 microns and therefore does not represent a true average composition of the sample. Further investigation of how grain size and composition alters the average spectrum is required to fully understand infrared spectra of planetary surfaces. [1] - Hiesinger H., Helbert J., and MERTIS Co-I Team. (2010). The Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) for the BepiColombo Mission. Planetary and Space Science. 58, 144-165. [2] - NASA Lunar Sample Compendium. https://curator.jsc.nasa.gov/lunar/lsc/

  18. Photographic techniques for characterizing streambed particle sizes

    USGS Publications Warehouse

    Whitman, Matthew S.; Moran, Edward H.; Ourso, Robert T.

    2003-01-01

    We developed photographic techniques to characterize coarse (>2-mm) and fine (≤2-mm) streambed particle sizes in 12 streams in Anchorage, Alaska. Results were compared with current sampling techniques to assess which provided greater sampling efficiency and accuracy. The streams sampled were wadeable and contained gravel—cobble streambeds. Gradients ranged from about 5% at the upstream sites to about 0.25% at the downstream sites. Mean particle sizes and size-frequency distributions resulting from digitized photographs differed significantly from those resulting from Wolman pebble counts for five sites in the analysis. Wolman counts were biased toward selecting larger particles. Photographic analysis also yielded a greater number of measured particles (mean = 989) than did the Wolman counts (mean = 328). Stream embeddedness ratings assigned from field and photographic observations were significantly different at 5 of the 12 sites, although both types of ratings showed a positive relationship with digitized surface fines. Visual estimates of embeddedness and digitized surface fines may both be useful indicators of benthic conditions, but digitizing surface fines produces quantitative rather than qualitative data. Benefits of the photographic techniques include reduced field time, minimal streambed disturbance, convenience of postfield processing, easy sample archiving, and improved accuracy and replication potential.

  19. The feasibility of using explicit method for linear correction of the particle size variation using NIR Spectroscopy combined with PLS2regression method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Suhandy, D.

    2018-03-01

    NIR spectra obtained from a spectral data acquisition system contain both chemical information on the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in the physical properties of samples. One common approach is to include the physical variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, we directly add the particle size as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing both the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that, using the explicit method, the quality of the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R2) = 0.99 and a root mean square error of cross-validation (RMSECV) = 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was quite good and able to predict the type of coffee at two different particle sizes with relatively high R2 pred values. The prediction also resulted in low bias and RMSEP values.
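    The explicit arrangement described here, spectra in the X block and particle size plus class in the Y block, can be sketched with a generic PLS2 implementation; the data below are random stand-ins for real spectra, so only the block structure, not the reported accuracy, is illustrated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p = 220, 256                        # samples x wavelengths (synthetic)
X = rng.normal(size=(n, p))            # stand-in for NIR spectra
size = rng.choice([212.0, 500.0], n)   # particle size (um)
kind = rng.integers(0, 2, n)           # coffee type, coded 0/1

# Explicit method: the Y block holds both particle size and coffee type,
# so the PLS2 model learns (and thereby compensates for) size variation.
Y = np.column_stack([size, kind]).astype(float)
pls = PLSRegression(n_components=10).fit(X, Y)
print(pls.score(X, Y))
```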

  20. Modeling grain size variations of aeolian gypsum deposits at White Sands, New Mexico, using AVIRIS imagery

    USGS Publications Warehouse

    Ghrefat, H.A.; Goodell, P.C.; Hubbard, B.E.; Langford, R.P.; Aldouri, R.E.

    2007-01-01

    Visible and Near-Infrared (VNIR) through Short Wavelength Infrared (SWIR) (0.4-2.5 μm) AVIRIS data, along with laboratory spectral measurements and analyses of field samples, were used to characterize grain size variations in aeolian gypsum deposits across barchan-transverse, parabolic, and barchan dunes at White Sands, New Mexico, USA. All field samples contained a mineralogy of ~100% gypsum. In order to document grain size variations at White Sands, surficial gypsum samples were collected along three transects parallel to the prevailing downwind direction. Grain size analyses were carried out on the samples by sieving them into seven size fractions ranging from 45 to 621 μm, which were subjected to spectral measurements. Absorption band depths of the size fractions were determined after applying an automated continuum-removal procedure to each spectrum. Then, the relationship between absorption band depth and gypsum size fraction was established using a linear regression. Three software processing steps were carried out to measure the grain size variations of gypsum in the Dune Area using AVIRIS data. AVIRIS mapping results, field work and laboratory analysis all show that the interdune areas have lower absorption band depth values and consist of finer grained gypsum deposits. In contrast, the dune crest areas have higher absorption band depth values and consist of coarser grained gypsum deposits. Based on laboratory estimates, a representative barchan-transverse dune (Transect 1) has a mean grain size of 1.16 φ (449 μm). The error bar results show that the error ranges from -50 to +50 μm. Mean grain size for a representative parabolic dune (Transect 2) is 1.51 φ (352 μm), and 1.52 φ (347 μm) for a representative barchan dune (Transect 3). T-test results confirm that there are differences in the grain size distributions between barchan and parabolic dunes and between interdune and dune crest areas. The t-test results also show that there are no significant differences between modeled and laboratory-measured grain size values. Hyperspectral grain size modeling can help to determine dynamic processes shaping the formation of the dunes, such as wind directions and the relative strengths of winds through time. This has implications for studying such processes on other planetary landforms that have mineralogy with unique absorption bands in VNIR-SWIR hyperspectral data. © 2006 Elsevier B.V. All rights reserved.
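    The band depth-to-grain size step amounts to a univariate linear calibration. A minimal sketch with hypothetical calibration pairs follows (the paper's actual regression coefficients are not given in the abstract):

```python
import numpy as np

# Hypothetical calibration pairs: continuum-removed absorption band depth
# vs. sieved gypsum size fraction (um); values are illustrative only.
band_depth = np.array([0.18, 0.26, 0.33, 0.41, 0.47, 0.52, 0.57])
grain_size = np.array([45., 107., 180., 256., 359., 461., 621.])

slope, intercept = np.polyfit(band_depth, grain_size, 1)
predict = lambda depth: slope * depth + intercept
print(predict(0.35))   # map an image pixel's band depth to a grain size
```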

  1. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    PubMed

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity levels of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity is not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude whether the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  2. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
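    Following the abstract's description, the beta-binomial can be used to evaluate the misclassification risks of a candidate C-LQAS rule. The sketch below parameterizes the overdispersion with a single correlation-like parameter rho; the thresholds and inputs are illustrative, not the Rwanda design values.

```python
from scipy.stats import betabinom

def lqas_risks(n, d, p_hi, p_lo, rho):
    """Risks of an LQAS rule 'accept the lot if successes >= d' when counts
    are beta-binomial with overdispersion parameter rho in (0, 1)."""
    def params(p):
        a = p * (1 - rho) / rho
        b = (1 - p) * (1 - rho) / rho
        return a, b
    alpha = betabinom.cdf(d - 1, n, *params(p_hi))   # reject a good lot
    beta = betabinom.sf(d - 1, n, *params(p_lo))     # accept a bad lot
    return alpha, beta

print(lqas_risks(n=60, d=50, p_hi=0.90, p_lo=0.75, rho=0.05))
```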

  3. Field size, length, and width distributions based on LACIE ground truth data. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Badhwar, G.

    1980-01-01

    The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.

  4. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    USGS Publications Warehouse

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Peterson mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
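    For reference, the mark-recapture abundance estimate discussed here is commonly computed with Chapman's bias-corrected form of the Lincoln-Peterson estimator; the catch numbers below are invented.

```python
def chapman_estimate(marked, captured, recaptured):
    """Chapman's bias-corrected Lincoln-Peterson abundance estimator."""
    return ((marked + 1) * (captured + 1)) / (recaptured + 1) - 1

# pass 1 marks 45 trout; pass 2 captures 52, of which 17 carry marks
print(chapman_estimate(45, 52, 17))   # ~134 fish in the closed site
```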

  5. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which distribution they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  6. Phase transformations in a Cu−Cr alloy induced by high pressure torsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korneva, Anna, E-mail: a.korniewa@imim.pl; Straumal, Boris; Institut für Nanotechnologie, Karlsruher Institut für Technologie, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen

    2016-04-15

    Phase transformations induced by high pressure torsion (HPT) at room temperature were studied in two samples of the Cu-0.86 at.% Cr alloy, pre-annealed at 550 °C and 1000 °C in order to obtain two different initial states for the HPT procedure. Observation of the microstructure of the samples before HPT revealed that the sample annealed at 550 °C contained two types of Cr precipitates in the Cu matrix: large particles (size about 500 nm) and small ones (size about 70 nm). The sample annealed at 1000 °C showed only a small fraction of Cr precipitates (size about 2 μm). The subsequent HPT process resulted in the partial dissolution of Cr precipitates in the first sample, and in dissolution of Cr precipitates with simultaneous decomposition of the supersaturated solid solution in the other. However, the resulting microstructure of the samples after HPT was very similar from the standpoint of grain size, phase composition, texture analysis and hardness measurements. - Highlights: • Cu−Cr alloy with two different initial states was deformed by HPT. • Phase transformations in the deformed materials were studied. • SEM, TEM and X-ray diffraction techniques were used for microstructure analysis. • HPT leads to formation of the same microstructure independent of the initial state.

  7. Particle size distribution of distillers dried grains with solubles (DDGS) and relationships to compositional and color properties.

    PubMed

    Liu, Keshun

    2008-11-01

    Eleven distillers dried grains with solubles (DDGS), processed from yellow corn, were collected from different ethanol processing plants in the US Midwest area. The particle size distribution (PSD) by mass of each sample was determined using a series of six selected US standard sieves (Nos. 8, 12, 18, 35, 60, and 100) and a pan. The original sample and the sieve-sized fractions were measured for surface color and contents of moisture, protein, oil, ash, and starch. Total carbohydrate (CHO) and total non-starch CHO were also calculated. Results show that there was great variation in composition and color among DDGS from different plants. Surprisingly, a few DDGS samples contained unusually high amounts of residual starch (11.1-17.6%, dry matter basis, vs. about 5% for the rest), presumably resulting from modified processing methods. Particle size varied greatly within a sample and PSD varied greatly among samples. The 11 samples had a mean value of 0.660 mm for the geometric mean diameter (dgw) of particles and a mean value of 0.440 mm for the geometric standard deviation (Sgw) of particle diameters by mass. The majority had a unimodal PSD, with a mode in the size class between 0.5 and 1.0 mm. Although PSD and color parameters had little correlation with the composition of whole DDGS samples, the distribution of nutrients as well as color attributes correlated well with PSD. In sieved fractions, protein content and the L and a color values correlated negatively, while oil and total CHO contents correlated positively, with particle size. It is highly feasible to fractionate DDGS for compositional enrichment based on particle size, while the extent of PSD can serve as an index of the potential for DDGS fractionation. The above information should be a vital addition to the quality and baseline data of DDGS.
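    The dgw and Sgw values quoted here are conventionally computed from sieve mass fractions with log-based formulas (ASAE S319 style). The sketch below simplifies by using nominal sieve openings directly rather than the standard mid-point diameters, and the masses are hypothetical:

```python
import numpy as np

# Hypothetical sieve analysis: nominal openings (mm) and mass retained (g).
openings = np.array([2.36, 1.70, 1.00, 0.50, 0.25, 0.15, 0.075])
mass = np.array([1.2, 3.5, 10.8, 24.6, 9.9, 4.1, 1.9])

# Geometric mean diameter by mass, log-base-10 form (ASAE S319 style)
log_d = np.log10(openings)
dgw = 10 ** (np.sum(mass * log_d) / mass.sum())
s_log = np.sqrt(np.sum(mass * (log_d - np.log10(dgw)) ** 2) / mass.sum())
sgw = 0.5 * dgw * (10 ** s_log - 10 ** (-s_log))   # geometric std dev (mm)
print(round(dgw, 3), round(sgw, 3))
```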

  8. Characteristics of randomised trials on diseases in the digestive system registered in ClinicalTrials.gov: a retrospective analysis.

    PubMed

    Wildt, Signe; Krag, Aleksander; Gluud, Liselotte

    2011-01-01

    Objectives To evaluate the adequacy of reporting of protocols for randomised trials on diseases of the digestive system registered in http://ClinicalTrials.gov and the consistency between primary outcomes, secondary outcomes and sample size specified in http://ClinicalTrials.gov and published trials. Methods Randomised phase III trials on adult patients with gastrointestinal diseases registered before January 2009 in http://ClinicalTrials.gov were eligible for inclusion. From http://ClinicalTrials.gov all data elements in the database required by the International Committee of Medical Journal Editors (ICMJE) member journals were extracted. The subsequent publications for registered trials were identified. For published trials, data concerning publication date, primary and secondary endpoint, sample size, and whether the journal adhered to ICMJE principles were extracted. Differences between primary and secondary outcomes, sample size and sample size calculations data in http://ClinicalTrials.gov and in the published paper were registered. Results 105 trials were evaluated. 66 trials (63%) were published. 30% of trials were registered incorrectly after their completion date. Several data elements of the required ICMJE data list were not filled in, with missing data in 22% and 11%, respectively, of cases concerning the primary outcome measure and sample size. In 26% of the published papers, data on sample size calculations were missing and discrepancies between sample size reporting in http://ClinicalTrials.gov and published trials existed. Conclusion The quality of registration of randomised controlled trials still needs improvement.

  9. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.

  10. Analysis of YBCO high temperature superconductor doped with silver nanoparticles and carbon nanotubes using Williamson-Hall and size-strain plot

    NASA Astrophysics Data System (ADS)

    Dadras, Sedigheh; Davoudiniya, Masoumeh

    2018-05-01

    This paper sets out to investigate and compare the effects of Ag nanoparticle and carbon nanotube (CNT) doping on the mechanical properties of the Y1Ba2Cu3O7-δ (YBCO) high temperature superconductor. For this purpose, pure and doped YBCO samples were synthesized by the sol-gel method. The microstructural analysis of the samples was performed using X-ray diffraction (XRD). The crystallite size, lattice strain and stress of the pure and doped YBCO samples were estimated by modified forms of Williamson-Hall (W-H) analysis, namely the uniform deformation model (UDM) and the uniform deformation stress model (UDSM), and by the size-strain plot method (SSP). The results show that the crystallite size, lattice strain and stress of the YBCO samples declined with Ag nanoparticle and CNT doping.
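    A minimal sketch of the UDM variant of Williamson-Hall analysis used here: plot β·cosθ against 4·sinθ, then read the strain from the slope and the crystallite size from the intercept. The peak list below is invented for illustration.

```python
import numpy as np

K, lam = 0.9, 0.15406   # Scherrer constant; Cu K-alpha wavelength (nm)

# Hypothetical XRD peak list: 2-theta (deg) and FWHM (rad) after
# instrumental correction; values are illustrative only.
two_theta = np.radians([32.8, 38.5, 40.4, 47.5, 58.2])
beta = np.array([0.0042, 0.0046, 0.0049, 0.0055, 0.0063])

theta = two_theta / 2.0
# UDM:  beta * cos(theta) = K * lam / D + 4 * eps * sin(theta)
x, y = 4 * np.sin(theta), beta * np.cos(theta)
eps, kl_over_D = np.polyfit(x, y, 1)    # slope = strain, intercept = K*lam/D
D = K * lam / kl_over_D
print(f"crystallite size D = {D:.1f} nm, strain eps = {eps:.2e}")
```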

  11. Simulation on Poisson and negative binomial models of count road accident modeling

    NASA Astrophysics Data System (ADS)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data have often been shown to exhibit overdispersion. On the other hand, the data might contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or the negative binomial distribution, with sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting a Poisson regression, a negative binomial regression and a hurdle negative binomial model to the simulated data. Model validation was compared across the fits, and the simulation results show that for each sample size, not all models fit the data well, even when the data were generated from the model's own distribution, especially when the sample size is larger. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
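    A sketch of this kind of simulate-and-fit loop, using statsmodels GLM fits and AIC for comparison; the covariate, coefficients and dispersion below are arbitrary choices, and the hurdle model is omitted for brevity:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2016)

for n_obs in (30, 100, 500):
    x = rng.uniform(0, 1, n_obs)                 # e.g. a traffic-flow covariate
    mu = np.exp(0.5 + 1.2 * x)
    y = rng.negative_binomial(2, 2 / (2 + mu))   # overdispersed counts
    X = sm.add_constant(x)
    poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
    print(n_obs, round(poisson.aic, 1), round(negbin.aic, 1))
```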

  12. Sample Size Estimation for Alzheimer's Disease Trials from Japanese ADNI Serial Magnetic Resonance Imaging.

    PubMed

    Fujishima, Motonobu; Kawaguchi, Atsushi; Maikusa, Norihide; Kuwano, Ryozo; Iwatsubo, Takeshi; Matsuda, Hiroshi

    2017-01-01

    Little is known about the sample sizes required for clinical trials of Alzheimer's disease (AD)-modifying treatments using atrophy measures from serial brain magnetic resonance imaging (MRI) in the Japanese population. The primary objective of the present study was to estimate how large a sample size would be needed for future clinical trials for AD-modifying treatments in Japan using atrophy measures of the brain as a surrogate biomarker. Sample sizes were estimated from the rates of change of the whole brain and hippocampus by the k-means normalized boundary shift integral (KN-BSI) and cognitive measures using the data of 537 Japanese Alzheimer's Neuroimaging Initiative (J-ADNI) participants with a linear mixed-effects model. We also examined the potential use of ApoE status as a trial enrichment strategy. The hippocampal atrophy rate required smaller sample sizes than cognitive measures of AD and mild cognitive impairment (MCI). Inclusion of ApoE status reduced sample sizes for AD and MCI patients in the atrophy measures. These results show the potential use of longitudinal hippocampal atrophy measurement using automated image analysis as a progression biomarker and ApoE status as a trial enrichment strategy in a clinical trial of AD-modifying treatment in Japanese people.
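    Sample size estimates of this kind typically reduce to a two-sample comparison of annualized change rates. A minimal sketch under that assumption, with illustrative atrophy-rate numbers rather than the J-ADNI estimates:

```python
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for detecting a mean difference `delta` in the
    annualized atrophy rate, given the between-subject SD of the rate."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

# e.g. a treatment that slows a 4.0 %/yr atrophy rate by 25%
print(round(n_per_arm(delta=0.25 * 4.0, sd=2.5)))   # ~98 per arm
```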

  13. Exploratory Factor Analysis with Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  14. An Investigation of Sample Size Splitting on ATFIND and DIMTEST

    ERIC Educational Resources Information Center

    Socha, Alan; DeMars, Christine E.

    2013-01-01

    Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…

  15. Grain size effect on the permittivity of La1.5Sr0.5NiO4 nanoparticles

    NASA Astrophysics Data System (ADS)

    Dang Thanh, Tran; Van Hong, Le

    2009-09-01

    La1.5Sr0.5NiO4 ceramic samples with different mean grain sizes were manufactured by annealing at different temperatures. The mean grain size of the samples was evaluated by the Warren-Averbach method and from their SEM images. The results of both methods are almost the same, changing from 16.2 to 95 nm depending on the annealing temperature. The frequency dependence of the dielectric constant in the frequency range of 1-13 MHz was recorded for all samples. The real (ɛ') and imaginary (ɛ") parts of the permittivity of the La1.5Sr0.5NiO4 samples depend anomalously on the frequency, exhibiting a dielectric resonance around 500 kHz. An R-L-C series equivalent circuit fitted the obtained results well. It is supposed that a magnetic contribution exists in the material, suggesting that the material is multiferroic. The dependence of ɛ' on the mean grain size suggests that the colossal dielectric property is an intrinsic behaviour of the La1.5Sr0.5NiO4 material.

  16. Influences of sampling size and pattern on the uncertainty of correlation estimation between soil water content and its influencing factors

    NASA Astrophysics Data System (ADS)

    Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua

    2017-12-01

    In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced, and each sampling size contained 3000 replicates. Under each sampling size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between the SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimations decreased as the sampling size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure the estimated correlation coefficients had REs and CVs ≤10%. Of all the sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure the estimated correlation coefficients had REs and CVs ≤10%. This suggested that when designing the SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for optimal SWC sampling design.
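    The RE/CV subsampling procedure is straightforward to mimic. The sketch below uses synthetic SWC values and one synthetic correlated property in place of the hillslope data, with the study's 3000 replicates per sample size:

```python
import numpy as np

rng = np.random.default_rng(7)
swc = rng.normal(0.30, 0.05, 100)              # SWC at 100 hillslope sites
clay = 0.6 * swc + rng.normal(0, 0.02, 100)    # a correlated soil property
true_r = np.corrcoef(swc, clay)[0, 1]

for n in (12, 24, 48, 72, 96):
    rs = []
    for _ in range(3000):                      # 3000 replicates per size
        idx = rng.choice(100, n, replace=False)
        rs.append(np.corrcoef(swc[idx], clay[idx])[0, 1])
    rs = np.array(rs)
    re = np.mean(np.abs(rs - true_r)) / abs(true_r) * 100
    cv = rs.std() / abs(rs.mean()) * 100
    print(n, round(re, 1), round(cv, 1))       # both shrink as n grows
```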

  17. Thermal conductivity of graphene mediated by strain and size

    DOE PAGES

    Kuang, Youdi; Shi, Sanqiang; Wang, Xinjiang; ...

    2016-06-09

    Based on first-principles calculations and a full iterative solution of the linearized Boltzmann-Peierls transport equation for phonons, we systematically investigate the effects of strain, size and temperature on the thermal conductivity k of suspended graphene. The calculated size-dependent and temperature-dependent k for finite samples agree well with experimental data. The results show that, in contrast to the convergent room-temperature k = 5450 W/m-K of unstrained graphene at a sample size of ~8 cm, the k of strained graphene diverges with increasing sample size even at high temperature. Out-of-plane acoustic phonons are responsible for the significant size effect in unstrained and strained graphene due to their ultralong mean free path, and acoustic phonons with wavelength smaller than 10 nm contribute 80% to the intrinsic room temperature k of unstrained graphene. Tensile strain hardens the flexural modes and increases their lifetimes, causing an interesting dependence of k on sample size and strain due to the competition between boundary scattering and intrinsic phonon-phonon scattering. The k of graphene can be tuned within a large range by strain for sizes larger than 500 μm. These findings shed light on the nature of thermal transport in two-dimensional materials and may guide predicting and engineering k of graphene by varying strain and size.

  18. Development of a copula-based particle filter (CopPF) approach for hydrologic data assimilation under consideration of parameter interdependence

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.

    2017-06-01

    In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure with new particles then being sampled from the obtained copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performances with low sample size scenarios. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method can provide more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size would not significantly influence the performance of CopPF. Also, the copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled by copulas in most sample size scenarios.
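    Purely as a sketch of the resampling idea (the paper's choice of copula families and estimation details are not given in the abstract), a Gaussian-copula particle resampler might look like the following; it preserves rank correlations between parameters while refreshing particle diversity.

```python
import numpy as np
from scipy.stats import norm

def copula_resample(particles, weights, n_new, rng):
    """Draw new parameter particles from a Gaussian copula fitted to the
    weighted posterior particles (weights must sum to one), preserving
    the parameters' interdependence structure."""
    n, d = particles.shape
    idx = rng.choice(n, n, p=weights)          # weighted bootstrap first
    ranks = np.argsort(np.argsort(particles[idx], axis=0), axis=0)
    z = norm.ppf((ranks + 0.5) / n)            # map ranks to normal scores
    corr = np.corrcoef(z, rowvar=False)        # copula correlation matrix
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
    u_new = norm.cdf(z_new)
    # invert the empirical marginals via quantiles of the resampled particles
    return np.column_stack([
        np.quantile(particles[idx, j], u_new[:, j]) for j in range(d)])
```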

  19. An anthropometric analysis of Korean male helicopter pilots for helicopter cockpit design.

    PubMed

    Lee, Wonsup; Jung, Kihyo; Jeong, Jeongrim; Park, Jangwoon; Cho, Jayoung; Kim, Heeeun; Park, Seikwon; You, Heecheon

    2013-01-01

    This study measured 21 anthropometric dimensions (ADs) of 94 Korean male helicopter pilots in their 20s to 40s and compared them with corresponding measurements of Korean male civilians and US Army male personnel. The ADs and the sample size of the anthropometric survey were determined by a four-step process: (1) selection of ADs related to helicopter cockpit design, (2) evaluation of the importance of each AD, (3) calculation of required sample sizes for selected precision levels and (4) determination of an appropriate sample size by considering both the AD importance evaluation results and the sample size requirements. The anthropometric comparison reveals that the Korean helicopter pilots are larger (ratio of means = 1.01-1.08) and less dispersed (ratio of standard deviations = 0.71-0.93) than the Korean male civilians, and that they are shorter in stature (0.99), have shorter upper limbs (0.89-0.96) and lower limbs (0.93-0.97), but are taller in sitting height, sitting eye height and acromial height (1.01-1.03), and less dispersed (0.68-0.97) than the US Army personnel. The sample size determination process and the anthropometric comparison results presented in this study are useful for designing an anthropometric survey and a helicopter cockpit layout, respectively.

  20. Vitamin D receptor gene and osteoporosis - author's response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Looney, J.E.; Yoon, Hyun Koo; Fischer, M.

    1996-04-01

    We appreciate the comments of Dr. Nguyen et al. about our recent study, but we disagree with their suggestion that the lack of an association between low bone density and the BB VDR genotype, which we reported, is an artifact generated by the small sample size. Furthermore, our results are consistent with similar conclusions reached by a number of other investigators, as recently reported by Peacock. Peacock states "Taken as a whole, the results of studies outlined ... indicate that VDR alleles cannot account for the major part of the heritable component of bone density as indicated by Morrison et al." The majority of the 17 studies cited in this editorial could not confirm an association between the VDR genotype and the bone phenotype. Surely one cannot criticize this combined work as representing an artifact because of too small a sample size. We do not dispute the suggestion by Nguyen et al. that large sample sizes are required to analyze small biological effects. This is evident in both Peacock's summary and in their own bone density studies. We did not design our study with a larger sample size because, based on the work of Morrison et al., we had hypothesized a large biological effect; large sample sizes are only needed for small biological effects. 4 refs.

  1. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both the kernel function type and the loss function used.
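
    As a rough illustration of the small-sample comparison described above, the sketch below trains an SVR and an MLP on the same 20 training points; the data are synthetic stand-ins for well logs and core porosity, and the hyperparameters are illustrative, not those of the study.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_squared_error
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(42)
        X = rng.normal(size=(200, 4))                                # stand-in geophysical logs
        y = X @ [0.5, -0.3, 0.2, 0.1] + 0.1 * rng.normal(size=200)   # stand-in porosity

        # Small training sample, large held-out set mimics the few-cores situation.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=20, random_state=0)

        svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", epsilon=0.05)).fit(X_tr, y_tr)
        mlp = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                         random_state=0)).fit(X_tr, y_tr)

        print("SVR test MSE:", mean_squared_error(y_te, svr.predict(X_te)))
        print("MLP test MSE:", mean_squared_error(y_te, mlp.predict(X_te)))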

  2. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies

    PubMed Central

    2014-01-01

    Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has coverage probability comparable to the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data are not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
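
    A minimal sketch of the idea follows, treating the estimated AUC as a single proportion with the total sample size; the exact modification proposed in the paper may differ in detail, so the function below is illustrative only.

        from statistics import NormalDist

        def wald_auc_ci(auc_hat, n, alpha=0.05, continuity=False):
            """Wald-type confidence interval treating the estimated AUC as a proportion."""
            z = NormalDist().inv_cdf(1 - alpha / 2)
            se = (auc_hat * (1 - auc_hat) / n) ** 0.5
            cc = 1 / (2 * n) if continuity else 0.0   # optional continuity correction
            lo = max(0.0, auc_hat - z * se - cc)
            hi = min(1.0, auc_hat + z * se + cc)
            return lo, hi

        # Example: estimated AUC of 0.85 from a study with 60 subjects in total.
        print(wald_auc_ci(0.85, n=60, continuity=True))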

  3. Phase Composition, Crystallite Size and Physical Properties of B2O3-added Forsterite Nano-ceramics

    NASA Astrophysics Data System (ADS)

    Pratapa, S.; Chairunnisa, A.; Nurbaiti, U.; Handoko, W. D.

    2018-05-01

    This study aimed to determine the effect of B2O3 addition on the phase composition, crystallite size and dielectric properties of forsterite (Mg2SiO4) nano-ceramics. It utilized a purified silica sand from Tanah Laut, South Kalimantan as the source of (amorphous) silica and a magnesium oxide (MgO) powder. These were thoroughly mixed and milled prior to calcination. The addition of 1, 2, 3, and 4 wt% B2O3 to the calcined powder was done before uniaxial pressing and then sintering at 950 °C for 4 h. The phase composition and forsterite crystallite size, the microstructure and the dielectric constant of the sintered samples were characterized using an X-ray diffractometer (XRD), a scanning electron microscope (SEM) and a vector network analyzer (VNA), respectively. Results showed that all samples contained forsterite, periclase (MgO) and protoenstatite (MgSiO3) with different weight fractions and forsterite crystallite sizes. In general, the weight fraction and crystallite size of forsterite increased with increasing B2O3 addition; in the 4%-added sample they reached 99 wt% and 164 nm, respectively. Furthermore, the SEM images showed that the average grain size became slightly larger and the ceramics slightly denser as more B2O3 was added. These results are in accordance with density measurements using the Archimedes method, which showed that the 4% ceramic exhibited an apparent density of 1.845 g/cm3, while the 1% ceramic exhibited 1.681 g/cm3. We also found that the higher the density, the higher the average dielectric constant: it was 4.6 for the 1%-added sample and 6.4 for the 4%-added sample.

  4. Controlled synthesis and luminescence properties of CaMoO4:Eu3+ microcrystals

    NASA Astrophysics Data System (ADS)

    Xie, Ying; Ma, Siming; Wang, Yu; Xu, Mai; Lu, Chengxi; Xiao, Linjiu; Deng, Shuguang

    2018-03-01

    Pure tetragonal-phased Ca0.9MoO4:0.1Eu3+ (CaMoO4:Eu3+) microcrystals with varying particle sizes were prepared via a co-deposition in water/oil (w/o) phase method. The particle sizes of the as-prepared samples were controlled by calcination temperature and calcination time, and the crystallinity of the samples improves with increasing particle size. The luminescence properties of the CaMoO4:Eu3+ microcrystals were studied as a function of particle size. The results reveal that the intensity of the emission spectra of the CaMoO4:Eu3+ samples increases with increasing particle size, and the two are closely correlated. The luminescence lifetime also varies systematically with particle size: it decreases from 0.637 ms to 0.447 ms as the particle size increases from 0.12 μm to 1.79 μm. This study not only provides information on the size-dependent luminescence properties of CaMoO4:Eu3+ but also gives a reference for potential applications in high-voltage electric porcelain materials.

  5. Large sample area and size are needed for forest soil seed bank studies to ensure low discrepancy with standing vegetation.

    PubMed

    Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin

    2014-01-01

    A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples of 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a large number of small-sized samples (LNSS), in a 1 ha forest plot and placed them to germinate in a greenhouse, and collected 30 samples of 1 m × 1 m × 10 cm, referred to as a small number of large-sized samples (SNLS), placing 10 each in a nearby secondary forest, shrub land and grassland. Only 15.7% of the woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% of woody plant species detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increase in the number of species with sampled area confirmed power-law relationships for the forest stand and for the LNSS and SNLS at all three recipient sites. Our results, although based on one forest, indicate that the conventional LNSS did not yield a high percentage of detection for woody species, whereas the SNLS strategy yielded a higher percentage of detection for woody species in the seed bank when samples were exposed to a better field germination environment. A 4 m2 minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. An increased sample size is also needed to obtain an increased sample area if the number of samples is to remain relatively low.

  6. Bootstrapping Results of Exercise Therapy and Education for Patients with Congestive Heart Failure

    ERIC Educational Resources Information Center

    Witta, E. Lea; Brubaker, Craig

    2003-01-01

    When studies are conducted over a period of time, the sample size typically decreases. In a study of the effects of exercise therapy and education with recovering congestive heart failure (CHF) patients (Brubaker, Witta, & Angelopoulus, 2003), the sample size decreased from over 40 to 9 participants after an 18-month time span. Although the…

  7. Inert gases in a terra sample - Measurements in six grain-size fractions and two single particles from Luna 20.

    NASA Technical Reports Server (NTRS)

    Heymann, D.; Lakatos, S.; Walton, J. R.

    1973-01-01

    Review of the results of inert gas measurements performed on six grain-size fractions and two single particles from four samples of Luna 20 material. Presented and discussed data include the inert gas contents, element and isotope systematics, radiation ages, and Ar-36/Ar-40 systematics.

  8. Chemical Composition and Source Apportionment of Size ...

    EPA Pesticide Factsheets

    The Cleveland airshed comprises a complex mixture of industrial source emissions that contribute to periods of non-attainment for fine particulate matter (PM2.5) and are associated with increased adverse health outcomes in the exposed population. Specific PM sources responsible for health effects however are not fully understood. Size-fractionated PM (coarse, fine, and ultrafine) samples were collected using a ChemVol sampler at an urban site (G.T. Craig (GTC)) and rural site (Chippewa Lake (CLM)) from July 2009 to June 2010, and then chemically analyzed. The resulting speciated PM data were apportioned by EPA positive matrix factorization to identify emission sources for each size fraction and location. For comparisons with the ChemVol results, PM samples were also collected with sequential dichotomous and passive samplers, and evaluated for source contributions to each sampling site. The ChemVol results showed that annual average concentrations of PM, elemental carbon, and inorganic elements in the coarse fraction at GTC were ~2, ~7, and ~3 times higher than those at CLM, respectively, while the smaller size fractions at both sites showed similar annual average concentrations. Seasonal variations of secondary aerosols (e.g., high NO3- level in winter and high SO42- level in summer) were observed at both sites. Source apportionment results demonstrated that the PM samples at GTC and CLM were enriched with local industrial sources (e.g., steel plant and coa

  9. Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research

    PubMed Central

    Bakker, Marjan; Wicherts, Jelte M.

    2014-01-01

    Background The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606

  10. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.

  11. On sample size of the Kruskal-Wallis test with application to a mouse peritoneal cavity study.

    PubMed

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
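
    A pilot-based power estimate of the kind described can be sketched with a simple bootstrap simulation; this is an illustration of the general idea, not the authors' specific method, and the pilot data below are hypothetical.

        import numpy as np
        from scipy.stats import kruskal

        def kw_power(pilot_groups, n_per_group, n_sims=2000, alpha=0.05, rng=None):
            """Estimated power of the Kruskal-Wallis test at a candidate sample size,
            obtained by resampling pilot data with no distributional assumptions."""
            rng = np.random.default_rng(rng)
            hits = 0
            for _ in range(n_sims):
                resampled = [rng.choice(g, size=n_per_group, replace=True)
                             for g in pilot_groups]
                if kruskal(*resampled).pvalue < alpha:
                    hits += 1
            return hits / n_sims

        # Hypothetical pilot data for three treatment groups.
        rng = np.random.default_rng(1)
        pilot = [rng.lognormal(0.0, 0.5, 15),
                 rng.lognormal(0.4, 0.5, 15),
                 rng.lognormal(0.6, 0.5, 15)]
        print(kw_power(pilot, n_per_group=25))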

  12. Comparative analyses of basal rate of metabolism in mammals: data selection does matter.

    PubMed

    Genoud, Michel; Isler, Karin; Martin, Robert D

    2018-02-01

    Basal rate of metabolism (BMR) is a physiological parameter that should be measured under strictly defined experimental conditions. In comparative analyses among mammals BMR is widely used as an index of the intensity of the metabolic machinery or as a proxy for energy expenditure. Many databases with BMR values for mammals are available, but the criteria used to select metabolic data as BMR estimates have often varied and the potential effect of this variability has rarely been questioned. We provide a new, expanded BMR database reflecting compliance with standard criteria (resting, postabsorptive state; thermal neutrality; adult, non-reproductive status for females) and examine potential effects of differential selectivity on the results of comparative analyses. The database includes 1739 different entries for 817 species of mammals, compiled from the original sources. It provides information permitting assessment of the validity of each estimate and presents the value closest to a proper BMR for each entry. Using different selection criteria, several alternative data sets were extracted and used in comparative analyses of (i) the scaling of BMR to body mass and (ii) the relationship between brain mass and BMR. It was expected that results would be especially dependent on selection criteria with small sample sizes and with relatively weak relationships. Phylogenetically informed regression (phylogenetic generalized least squares, PGLS) was applied to the alternative data sets for several different clades (Mammalia, Eutheria, Metatheria, or individual orders). For Mammalia, a 'subsampling procedure' was also applied, in which random subsamples of different sample sizes were taken from each original data set and successively analysed. In each case, two data sets with identical sample size and species, but comprising BMR data with different degrees of reliability, were compared. Selection criteria had minor effects on scaling equations computed for large clades (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. <100) results often appeared erratic. Subsampling revealed that sample size has a non-linear effect on the probability of a zero slope for a given relationship. Depending on the species included, results could differ dramatically, especially with small sample sizes. Overall, our findings indicate a need for due diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.

  13. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches for handling large samples in tests of fit have been developed. One strategy to handle the sample size problem is to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
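
    For concreteness, one common form of sample-size adjustment scales the obtained chi-square by the ratio of the target to the actual sample size; the adjustment function studied in the paper may differ, so the sketch below is illustrative. It compares the adjusted value with the average chi-square over actual random subsamples.

        import numpy as np

        def adjusted_chi2(chi2, n, n_target):
            """Scale a test-of-fit chi-square from sample size n down to n_target
            (one common adjustment; the paper's exact function may differ)."""
            return (n_target - 1) / (n - 1) * chi2

        def subsampled_chi2(data, n_target, stat_fn, n_reps=200, rng=None):
            """Average chi-square over random subsamples of size n_target.
            stat_fn is user-supplied: it refits the measurement model on the
            subsample and returns the model's chi-square statistic."""
            rng = np.random.default_rng(rng)
            stats = []
            for _ in range(n_reps):
                idx = rng.choice(len(data), size=n_target, replace=False)
                stats.append(stat_fn(data[idx]))
            return float(np.mean(stats))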

  14. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.

  15. Rock sampling. [method for controlling particle size distribution]

    NASA Technical Reports Server (NTRS)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  16. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
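
    A minimal numeric sketch of the logistic-regression case follows, under the stated construction: two equally sized groups whose linear predictors differ by the slope times twice the covariate's standard deviation, with the overall response probability preserved. The sample-size step uses the standard two-proportion formula, and all parameter values are illustrative.

        import math
        from statistics import NormalDist
        from scipy.optimize import brentq

        def expit(x):
            return 1.0 / (1.0 + math.exp(-x))

        def logit(p):
            return math.log(p / (1.0 - p))

        def equivalent_two_sample_n(beta, sd_x, p_overall, alpha=0.05, power=0.8):
            delta = 2.0 * beta * sd_x  # log-odds difference between the two samples
            # Choose p1 so the average of the two response probabilities is p_overall.
            p1 = brentq(lambda p: 0.5 * (p + expit(logit(p) + delta)) - p_overall,
                        1e-9, 1 - 1e-9)
            p2 = expit(logit(p1) + delta)
            z_a = NormalDist().inv_cdf(1 - alpha / 2)
            z_b = NormalDist().inv_cdf(power)
            pbar = 0.5 * (p1 + p2)
            n = ((z_a * math.sqrt(2 * pbar * (1 - pbar))
                  + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
                 / (p1 - p2)) ** 2
            return p1, p2, math.ceil(n)  # n per equivalent group

        print(equivalent_two_sample_n(beta=0.35, sd_x=1.2, p_overall=0.3))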

  17. Some physical properties of Apollo 12 lunar samples

    NASA Technical Reports Server (NTRS)

    Gold, T.; Oleary, B. T.; Campbell, M.

    1971-01-01

    The size distribution of the lunar fines is measured, and small but significant differences are found between the Apollo 11 and 12 samples as well as among the Apollo 12 core samples. The observed differences in grain size distribution in the core samples are related to surface transportation processes, and the importance of a sedimentation process versus meteoritic impact gardening of the mare grounds is discussed. The optical and radio-frequency electrical properties are measured and are also found to differ only slightly from Apollo 11 results.

  18. A Bayesian Perspective on the Reproducibility Project: Psychology

    PubMed Central

    Etz, Alexander; Vandekerckhove, Joachim

    2016-01-01

    We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable. PMID:26919473

  19. A Bayesian Perspective on the Reproducibility Project: Psychology.

    PubMed

    Etz, Alexander; Vandekerckhove, Joachim

    2016-01-01

    We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors-a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis-for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
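
    Bayes factors of the kind discussed can be approximated quickly from summary data. The sketch below uses the generic BIC approximation BF01 ≈ exp((BIC1 - BIC0)/2) for a two-group comparison; it is an illustration only, not the computation used in the paper (which models publication bias explicitly).

        import numpy as np

        def bic_bayes_factor_ttest(x, y):
            """Approximate BF01 (evidence for the null) for a two-sample comparison."""
            data = np.concatenate([x, y])
            n = data.size
            # Null model: one common mean (1 mean parameter).
            rss0 = np.sum((data - data.mean()) ** 2)
            bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
            # Alternative model: separate group means (2 mean parameters).
            rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)
            bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
            return np.exp((bic1 - bic0) / 2.0)

        rng = np.random.default_rng(7)
        x, y = rng.normal(0, 1, 30), rng.normal(0.5, 1, 30)
        bf01 = bic_bayes_factor_ttest(x, y)
        print("BF01 =", bf01, "| BF10 =", 1 / bf01)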

  20. Effects of particle size on magnetostrictive properties of magnetostrictive composites with low particulate volume fraction

    NASA Astrophysics Data System (ADS)

    Dong, Xufeng; Guan, Xinchun; Ou, Jinping

    2009-03-01

    In the past ten years, there have been several investigations of the effects of particle size on the magnetostrictive properties of polymer-bonded Terfenol-D composites, but their results did not agree. To resolve this conflict, Terfenol-D/unsaturated polyester resin composite samples were prepared from Tb0.3Dy0.7Fe2 powder with 20% volume fraction in six particle-size ranges (30-53, 53-150, 150-300, 300-450, 450-500 and 30-500 μm). Their magnetostrictive properties were then tested. The results indicate that the 53-150 μm distribution presents the largest static and dynamic magnetostriction among the five monodisperse distributions, but the 30-500 μm (polydisperse) distribution shows an even larger response than the 53-150 μm distribution. This indicates that particle size is a double-edged sword for the magnetostrictive properties of such composites. The existence of an optimal particle size for preparing polymer-bonded Terfenol-D of composition Tb0.3Dy0.7Fe2 results from the competition between the positive and negative effects of increasing particle size. At small particle sizes, the voids and the demagnetization effect decrease significantly with increasing particle size, leading to an increase of magnetostriction; at larger particle sizes, the percentage of single-crystal particles and the packing density decrease with increasing particle size, resulting in a decrease of magnetostriction. The reasons why previous studies obtained different results are also analyzed.

  1. Determination of sample size for higher volatile data using new framework of Box-Jenkins model with GARCH: A case study on gold price

    NASA Astrophysics Data System (ADS)

    Roslindar Yaziz, Siti; Zakaria, Roslinazairimah; Hura Ahmad, Maizah

    2017-09-01

    The hybrid Box-Jenkins - GARCH model has been shown to be a promising tool for forecasting highly volatile time series. In this study, a framework for determining the optimal sample size using the Box-Jenkins model with GARCH is proposed for practical application in analysing and forecasting highly volatile data. The proposed framework is applied to the daily world gold price series from 1971 to 2013. The data are divided into 12 different sample sizes (from 30 to 10200). Each sample is tested using different combinations of the hybrid Box-Jenkins - GARCH model. Our study shows that the optimal sample size for forecasting the gold price using this framework is 1250 observations, i.e. a 5-year sample. Hence, the empirical results of model selection criteria and 1-step-ahead forecasting evaluations suggest that the most recent 12.25% (5 years) of the 10200 observations is sufficient for the Box-Jenkins - GARCH model, with forecasting performance similar to that obtained using the full 41-year series.
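
    A two-stage fit of this kind can be sketched as follows, assuming a hypothetical price series and illustrative model orders (the study searches over several combinations); the statsmodels and arch packages are used here as one possible toolchain.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA
        from arch import arch_model

        rng = np.random.default_rng(0)
        # Stand-in for a 1250-observation (5-year) gold price sample.
        price = pd.Series(np.exp(np.cumsum(rng.normal(0, 0.01, 1250))))

        # Stage 1: Box-Jenkins (ARIMA) model for the conditional mean.
        arima_res = ARIMA(np.log(price), order=(1, 1, 1)).fit()

        # Stage 2: GARCH(1,1) on the ARIMA residuals for the conditional variance
        # (residuals rescaled by 100 to help the optimizer converge).
        garch_res = arch_model(arima_res.resid.dropna() * 100, vol="GARCH",
                               p=1, q=1).fit(disp="off")

        print("ARIMA AIC:", arima_res.aic, "| GARCH AIC:", garch_res.aic)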

  2. Effect of flaw size and temperature on the matrix cracking behavior of a brittle ceramic matrix composite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anandakumar, U.; Webb, J.E.; Singh, R.N.

    The matrix cracking behavior of a zircon matrix/uniaxial SCS-6 fiber composite was studied as a function of initial flaw size and temperature. The composites were fabricated by a tape casting and hot pressing technique. Surface flaws of controlled size were introduced using a Vickers indenter. The composite samples were tested in three-point flexure at three different temperatures to study the non-steady-state and steady-state matrix cracking behavior. The composite samples exhibited steady-state and non-steady-state matrix cracking behavior at all temperatures. The steady-state matrix cracking stress and steady-state crack size increased with increasing temperature. The results of the study correlated well with the results predicted by the matrix cracking models.

  3. An Integrated Tool for System Analysis of Sample Return Vehicles

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Maddock, Robert W.; Winski, Richard G.

    2012-01-01

    The next important step in space exploration is the return of sample materials from extraterrestrial locations to Earth for analysis. Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle. The analysis and design of entry vehicles is multidisciplinary in nature, requiring the application of mass sizing, flight mechanics, aerodynamics, aerothermodynamics, thermal analysis, structural analysis, and impact analysis tools. Integration of a multidisciplinary problem is a challenging task; the execution process and data transfer among disciplines should be automated and consistent. This paper describes an integrated analysis tool for the design and sizing of an Earth entry vehicle. The current tool includes the following disciplines: mass sizing, flight mechanics, aerodynamics, aerothermodynamics, and impact analysis tools. Python and Java languages are used for integration. Results are presented and compared with the results from previous studies.

  4. How soft independent modeling by class analogy (SIMCA) is affected by the relation between independent variables, the number of independent variables, and sample size.

    PubMed

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, Irem Ersöz

    2013-03-01

    The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to determine whether the method is affected by the number of independent variables, the relationship between variables, and sample size. Simulation study. The SIMCA model is performed in two stages. Simulations were run to determine whether the method is influenced by the number of independent variables, the relationship between variables, and sample size. Conditions were examined in which the sample sizes of the two groups were equal, with 30, 100 or 1000 samples; the number of variables was 2, 3, 5, 10, 50 or 100; and the relationships between variables were quite high, medium, or quite low. Average classification accuracies over 1000 simulation runs for each condition of the trial plan are given as tables. Diagnostic accuracy is seen to increase as the number of independent variables increases. SIMCA is thus a method suited to conditions in which the relationships between variables are quite high, the independent variables are many in number, and the data contain outlier values.

  5. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    PubMed

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients, fitted to the available repeated measurements for each subject separately, serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.

  6. Addressing the "Replication Crisis": Using Original Studies to Design Replication Studies with Appropriate Statistical Power.

    PubMed

    Anderson, Samantha F; Maxwell, Scott E

    2017-01-01

    Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
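
    The gap between intended and actual power is easy to demonstrate by simulation. The sketch below (illustrative, not the authors' code) plans each replication from the observed effect size of a significant original study and then evaluates the planned study's power against the true effect.

        import numpy as np
        from scipy.stats import ttest_ind
        from statsmodels.stats.power import TTestIndPower

        rng = np.random.default_rng(3)
        true_d, n_orig, sims = 0.3, 25, 2000   # an underpowered original design
        solver = TTestIndPower()
        actual_power_sum, kept = 0.0, 0
        for _ in range(sims):
            x = rng.normal(0, 1, n_orig)
            y = rng.normal(true_d, 1, n_orig)
            if ttest_ind(x, y).pvalue >= 0.05:
                continue                        # publication bias: only significant originals
            kept += 1
            d_obs = (y.mean() - x.mean()) / np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
            # Plan the replication for 80% power at the (biased) observed effect size.
            n_rep = solver.solve_power(effect_size=abs(d_obs), power=0.8, alpha=0.05)
            # Actual power of that plan under the true effect:
            actual_power_sum += solver.power(effect_size=true_d, nobs1=n_rep, alpha=0.05)

        print("mean actual power:", actual_power_sum / kept, "(intended: 0.80)")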

  7. A USANS/SANS study of the accessibility of pores in the Barnett Shale to methane and water

    USGS Publications Warehouse

    Ruppert, Leslie F.; Sakurovs, Richard; Blach, Tomasz P.; He, Lilin; Melnichenko, Yuri B.; Mildner, David F.; Alcantar-Lopez, Leo

    2013-01-01

    Shale is an increasingly important source of natural gas in the United States. The gas is held in fine pores that need to be accessed by horizontal drilling and hydrofracturing techniques. Understanding the nature of the pores may provide clues to making gas extraction more efficient. We have investigated two Mississippian Barnett Shale samples, combining small-angle neutron scattering (SANS) and ultrasmall-angle neutron scattering (USANS) to determine the pore size distribution of the shale over the size range 10 nm to 10 μm. By adding deuterated methane (CD4) and, separately, deuterated water (D2O) to the shale, we have identified the fraction of pores that are accessible to these compounds over this size range. The total pore size distribution is essentially identical for the two samples. At pore sizes >250 nm, >85% of the pores in both samples are accessible to both CD4 and D2O. However, differences in accessibility to CD4 are observed in the smaller pore sizes (~25 nm). In one sample, CD4 penetrated the smallest pores as effectively as it did the larger ones. In the other sample, less than 70% of the smallest pores were accessible to CD4, but they were still largely penetrable by water, suggesting that small-scale heterogeneities in methane accessibility occur in the shale samples even though the total porosity does not differ. An additional study investigating the dependence of scattered intensity on CD4 pressure allows for an accurate estimation of the pressure at which the scattered intensity is at a minimum. This study provides information about the composition of the material immediately surrounding the pores. Most of the accessible (open) pores in the 25 nm size range can be associated with either mineral matter or high-reflectance organic material. However, a complementary scanning electron microscopy investigation shows that most of the pores in these shale samples are contained in the organic components. The neutron scattering results indicate that the pores are not equally proportioned among the different constituents within the shale. There is some indication from the SANS results that the composition of the pore-containing material varies with pore size; the pore size distribution associated with mineral matter is different from that associated with organic phases.

  8. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--QA ANALYTICAL RESULTS FOR PARTICULATE MATTER IN BLANK SAMPLES

    EPA Science Inventory

    The Particulate Matter in Blank Samples data set contains the analytical results for measurements of two particle sizes in 12 samples. Filters were pre-weighed, loaded into impactors, kept unexposed in the laboratory, unloaded and post-weighed. Positive weight gains for laborat...

  9. Visual accumulation tube for size analysis of sands

    USGS Publications Warehouse

    Colby, B.C.; Christensen, R.P.

    1956-01-01

    The visual-accumulation-tube method was developed primarily for making size analyses of the sand fractions of suspended-sediment and bed-material samples. Because the fundamental property governing the motion of a sediment particle in a fluid is believed to be its fall velocity, the analysis is designed to determine the fall-velocity-frequency distribution of the individual particles of the sample. The analysis is based on a stratified sedimentation system in which the sample is introduced at the top of a transparent settling tube containing distilled water. The procedure involves the direct visual tracing of the height of sediment accumulation in a contracted section at the bottom of the tube. A pen records the height on a moving chart. The method is simple and fast, provides a continuous and permanent record, gives highly reproducible results, and accurately determines the fall-velocity characteristics of the sample. The apparatus, procedure, results, and accuracy of the visual-accumulation-tube method for determining the sedimentation-size distribution of sands are presented in this paper.

  10. Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research

    ERIC Educational Resources Information Center

    Cook, David A.; Hatala, Rose

    2015-01-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…

  11. Properties of Gd2O3 nanoparticles studied by hyperfine interactions and magnetization measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Correa, E. L., E-mail: eduardo.correa@usp.br; Bosch-Santos, B.; Cavalcante, F. H. M.

    2016-05-15

    The magnetic behavior of Gd2O3 nanoparticles, produced by a thermal decomposition method and subsequently annealed at different temperatures, was investigated by magnetization measurements and, at an atomic level, by perturbed γ-γ angular correlation (PAC) spectroscopy measuring hyperfine interactions at 111In(111Cd) probe nuclei. Nanoparticle structure, size and shape were characterized by X-ray diffraction (XRD) and transmission electron microscopy (TEM). Magnetization measurements were carried out to characterize the paramagnetic behavior of the samples. XRD results show that all samples crystallize in the cubic-C form of the bixbyite structure with space group Ia3. TEM images showed that samples annealed at 873 K contain particles with highly homogeneous sizes in the range from 5 nm to 10 nm, while those annealed at 1273 K show particles with quite different sizes, from 5 nm to 100 nm, in a wide size distribution. PAC and magnetization results show that samples annealed at 873 and 1273 K are paramagnetic. Magnetization measurements show no indication of blocking temperatures for any sample down to 2 K, and indicate the presence of antiferromagnetic correlations.

  12. A Scanning Transmission Electron Microscopy Method for Determining Manganese Composition in Welding Fume as a Function of Primary Particle Size

    PubMed Central

    Richman, Julie D.; Livi, Kenneth J.T.; Geyh, Alison S.

    2011-01-01

    Increasing evidence suggests that the physicochemical properties of inhaled nanoparticles influence the resulting toxicokinetics and toxicodynamics. This report presents a method using scanning transmission electron microscopy (STEM) to measure the Mn content throughout the primary particle size distribution of welding fume particle samples collected on filters for application in exposure and health research. Dark field images were collected to assess the primary particle size distribution and energy-dispersive X-ray and electron energy loss spectroscopy were performed for measurement of Mn composition as a function of primary particle size. A manual method incorporating imaging software was used to measure the primary particle diameter and to select an integration region for compositional analysis within primary particles throughout the size range. To explore the variation in the developed metric, the method was applied to 10 gas metal arc welding (GMAW) fume particle samples of mild steel that were collected under a variety of conditions. The range of Mn composition by particle size was −0.10 to 0.19 %/nm, where a positive estimate indicates greater relative abundance of Mn increasing with primary particle size and a negative estimate conversely indicates decreasing Mn content with size. However, the estimate was only statistically significant (p<0.05) in half of the samples (n=5), which all had a positive estimate. In the remaining samples, no significant trend was measured. Our findings indicate that the method is reproducible and that differences in the abundance of Mn by primary particle size among welding fume samples can be detected. PMID:21625364

  13. A Scanning Transmission Electron Microscopy Method for Determining Manganese Composition in Welding Fume as a Function of Primary Particle Size.

    PubMed

    Richman, Julie D; Livi, Kenneth J T; Geyh, Alison S

    2011-06-01

    Increasing evidence suggests that the physicochemical properties of inhaled nanoparticles influence the resulting toxicokinetics and toxicodynamics. This report presents a method using scanning transmission electron microscopy (STEM) to measure the Mn content throughout the primary particle size distribution of welding fume particle samples collected on filters for application in exposure and health research. Dark field images were collected to assess the primary particle size distribution and energy-dispersive X-ray and electron energy loss spectroscopy were performed for measurement of Mn composition as a function of primary particle size. A manual method incorporating imaging software was used to measure the primary particle diameter and to select an integration region for compositional analysis within primary particles throughout the size range. To explore the variation in the developed metric, the method was applied to 10 gas metal arc welding (GMAW) fume particle samples of mild steel that were collected under a variety of conditions. The range of Mn composition by particle size was -0.10 to 0.19 %/nm, where a positive estimate indicates greater relative abundance of Mn increasing with primary particle size and a negative estimate conversely indicates decreasing Mn content with size. However, the estimate was only statistically significant (p<0.05) in half of the samples (n=5), which all had a positive estimate. In the remaining samples, no significant trend was measured. Our findings indicate that the method is reproducible and that differences in the abundance of Mn by primary particle size among welding fume samples can be detected.

  14. Dual-window dual-bandwidth spectroscopic optical coherence tomography metric for qualitative scatterer size differentiation in tissues.

    PubMed

    Tay, Benjamin Chia-Meng; Chow, Tzu-Hao; Ng, Beng-Koon; Loh, Thomas Kwok-Seng

    2012-09-01

    This study investigates the autocorrelation bandwidths of the dual-window (DW) optical coherence tomography (OCT) k-space scattering profiles of different-sized microspheres and their correlation to scatterer size. A dual-bandwidth spectroscopic metric, defined as the ratio of the 10% to 90% autocorrelation bandwidths, is found to change monotonically with microsphere size and gives the best contrast enhancement for scatterer size differentiation in the resulting spectroscopic image. A simulation model supports the experimental results and reveals a tradeoff between the smallest detectable scatterer size and the maximum scatterer size in the linear range of the dual-window dual-bandwidth (DWDB) metric, which depends on the choice of the light source optical bandwidth. Spectroscopic OCT (SOCT) images of microspheres and tonsil tissue samples based on the proposed DWDB metric showed clear differentiation between different-sized scatterers as compared to those derived from conventional short-time Fourier transform metrics. The DWDB metric significantly improves the contrast in SOCT imaging and can aid the visualization and identification of dissimilar scatterer sizes in a sample. Potential applications include the early detection of cell nuclear changes in tissue carcinogenesis, the monitoring of healing tendons, and cell proliferation in tissue scaffolds.

  15. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are used to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
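
    As one concrete example of a variance-minimizing allocation rule, Neyman allocation assigns patients in proportion to each arm's outcome standard deviation; this is a generic sketch under that assumption, not necessarily the paper's exact randomization rate.

        import numpy as np

        def neyman_rate(outcomes_a, outcomes_b):
            """Probability of assigning the next patient to arm A.

            For a difference-in-means test statistic, allocating in proportion
            to each arm's standard deviation minimizes the variance of the
            estimated treatment effect.
            """
            s_a = np.std(outcomes_a, ddof=1)
            s_b = np.std(outcomes_b, ddof=1)
            return s_a / (s_a + s_b)

        rng = np.random.default_rng(5)
        a, b = rng.normal(1.0, 2.0, 40), rng.normal(1.5, 1.0, 40)
        print("P(assign to A) =", neyman_rate(a, b))  # higher-variance arm gets more patients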

  16. A new estimator of the discovery probability.

    PubMed

    Favaro, Stefano; Lijoi, Antonio; Prünster, Igor

    2012-12-01

    Species sampling problems have a long history in ecological and biological studies and a number of issues, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety, are to be addressed. Such inferential problems have recently emerged also in genomic applications, however, exhibiting some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes) and only a small portion of the library has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, focus will be on prediction of a key aspect of the outcome from an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be exactly evaluated. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species, as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries and the results are illustrated by means of two expressed sequence tags datasets. © 2012, The International Biometric Society.
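
    For orientation, the classical (frequentist) Good-Turing estimator addresses the simplest version of this problem, the probability that the next observation is a new species; the paper's Bayesian nonparametric estimator generalizes this to species seen with any given frequency in the enlarged sample. The sketch below shows only the classical baseline.

        from collections import Counter

        def good_turing_new_species(sample):
            """Estimate P(the next observation is a previously unseen species)."""
            counts = Counter(sample)
            n = len(sample)
            singletons = sum(1 for c in counts.values() if c == 1)
            return singletons / n

        genes = ["g1", "g2", "g2", "g3", "g4", "g4", "g4", "g5"]  # toy EST-like sample
        print(good_turing_new_species(genes))  # 3 singletons / 8 reads = 0.375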

  17. Field application of a multi-frequency acoustic instrument to monitor sediment for silt erosion study in Pelton turbine in Himalayan region, India

    NASA Astrophysics Data System (ADS)

    Rai, A. K.; Kumar, A.; Hies, T.; Nguyen, H. H.

    2016-11-01

    High sediment loads passing through hydropower components erode the hydraulic surfaces, resulting in loss of efficiency, interruptions in power production and downtime for repair/maintenance, especially in Himalayan regions. The size and concentration of sediment play a major role in silt erosion. The traditional process of collecting samples manually for laboratory analysis cannot meet the need to monitor temporal variation in sediment properties. In this study, a multi-frequency acoustic instrument was deployed at the desilting chamber to monitor the size and concentration of sediment entering the turbine. The sediment size and concentration entering the turbine were also measured in manual samples collected twice daily. The manually collected samples were analysed in the laboratory with a laser diffraction instrument for size and concentration, and by drying and filtering methods for concentration. A conductivity probe was used to calculate total dissolved solids, which was then combined with the drying-method results to calculate the suspended solid content of the samples. The acoustic instrument was found to provide sediment concentration values similar to those from the drying and filtering methods. However, in this first field application, the mean grain size from the acoustic method, at its current stage of development, did not match well with that from laser diffraction. Future versions of the software and significant sensitivity improvements of the ultrasonic transducers are expected to increase the accuracy of the results. As the instrument is able to capture the concentration, and in the future most likely a more accurate mean grain size, of the suspended sediments, its application for monitoring silt erosion in hydropower plants should be highly useful.

  18. Determination of Minimum Training Sample Size for Microarray-Based Cancer Outcome Prediction–An Empirical Assessment

    PubMed Central

    Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu

    2013-01-01

    The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of the statistical parameters involved in the classifiers, which cannot be reliably estimated with only a small number of training samples. It is therefore of vital importance to determine the minimum number of training samples needed to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively, based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). A protocol based on the scale of signal-to-noise ratio (SSNR) was proposed in this study for determining the minimum training sample size. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into routine clinical applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
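
    The SSNR protocol itself is not reproduced here. A related, widely used way to estimate a minimum training size is to fit an inverse power-law learning curve to cross-validated accuracies and locate where the curve approaches its plateau; the accuracy values below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def learning_curve(n, a, b, c):
        # inverse power law: accuracy rises toward the plateau a as n grows
        return a - b * np.power(n, -c)

    # hypothetical (training size, cross-validated accuracy) pairs
    n_train = np.array([10, 20, 40, 80, 160, 320], float)
    acc = np.array([0.62, 0.68, 0.73, 0.77, 0.79, 0.80])

    (a, b, c), _ = curve_fit(learning_curve, n_train, acc,
                             p0=[0.85, 1.0, 0.5], maxfev=10000)

    # smallest n whose predicted accuracy is within 1% of the plateau
    n_min = (b / 0.01) ** (1 / c)
    print(f"plateau ~ {a:.3f}, minimum training size ~ {n_min:.0f}")
    ```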

  19. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed to estimate ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations that minimize the sample size required to estimate ED to the desired precision and confidence. The minimization is subject to a constraint on the variation of the estimated ED and is solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. The sample size increases as the required precision or confidence rises, and also as the anticipated ED falls. For example, for a helical chest protocol, for 95% confidence and 5% precision in the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops, enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150
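
    A simplified, normal-theory version of the sample size question: if repeated phantom scans have coefficient of variation CV, the number of scans needed for relative precision delta at a given confidence is n = (z * CV / delta)^2. The paper's Lagrange-multiplier scheme additionally accounts for MOSFET calibration error, which this sketch omits; the CV value is hypothetical.

    ```python
    import math
    from scipy.stats import norm

    def scans_needed(cv, precision=0.05, confidence=0.95):
        """Scans required so the mean ED is estimated to within `precision`
        (relative) at the given confidence, ignoring calibration error."""
        z = norm.ppf(1 - (1 - confidence) / 2)
        return math.ceil((z * cv / precision) ** 2)

    # hypothetical 13% between-scan coefficient of variation
    print(scans_needed(cv=0.13))  # -> 26 scans
    ```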

  20. Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials

    PubMed Central

    Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla

    2016-01-01

    Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportion reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be the most prevalent. Sample size calculations in cohort designs were particularly poor, with only 3 of 24 (13%) such studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of appropriate guidelines for reporting, and for methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897

  1. Dependence of flux-flow critical frequencies and generalized bundle sizes on distance of fluxoid traversal and fluxoid length in foil samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, J.D.; Joiner, W.C.H.

    1979-10-01

    Flux-flow noise power spectra taken on Pb80In20 foils as a function of the orientation of the magnetic field with respect to the sample surfaces are used to study changes in frequencies and bundle sizes as distances of fluxoid traversal and fluxoid lengths change. The results obtained for the frequency dependence of the noise spectra are entirely consistent with our model for flux motion interrupted by pinning centers, provided one makes the reasonable assumption that the distance between pinning centers which a fluxoid may encounter scales inversely with the fluxoid length. The importance of pinning centers in determining the noise characteristics is also demonstrated by the way in which subpulse distributions and generalized bundle sizes are altered by changes in the metallurgical structure of the sample. In unannealed samples the dependence of bundle size on magnetic field orientation is controlled by a structural anisotropy, and we find a correlation between large bundle size and the absence of short subpulse times. Annealing removes this anisotropy, and we find a stronger angular variation of bundle size than would be expected using present simplified models.

  2. Inertial impaction air sampling device

    DOEpatents

    Dewhurst, Katharine H.

    1990-01-01

    An inertial impactor to be used in an air sampling device for collection of respirable size particles in ambient air which may include a graphite furnace as the impaction substrate in a small-size, portable, direct analysis structure that gives immediate results and is totally self-contained allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by means of the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry.

  3. Inertial impaction air sampling device

    DOEpatents

    Dewhurst, K.H.

    1987-12-10

    An inertial impactor to be used in an air sampling device for collection of respirable size particles in ambient air which may include a graphite furnace as the impaction substrate in a small-size, portable, direct analysis structure that gives immediate results and is totally self-contained allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by means of the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry. 3 figs.

  4. Porosity of the Marcellus Shale: A contrast matching small-angle neutron scattering study

    USGS Publications Warehouse

    Bahadur, Jitendra; Ruppert, Leslie F.; Pipich, Vitaliy; Sakurovs, Richard; Melnichenko, Yuri B.

    2018-01-01

    Neutron scattering techniques were used to determine the effect of mineral matter on the accessibility of water and toluene to pores in the Devonian Marcellus Shale. Three Marcellus Shale samples, representing quartz-rich, clay-rich, and carbonate-rich facies, were examined using contrast matching small-angle neutron scattering (CM-SANS) at ambient pressure and temperature. Contrast-matched mixtures of H2O/D2O and toluene/deuterated toluene were used to probe the open and closed pores of these three shale samples. Results show that although the mean pore radius was approximately the same for all three samples, the fractal dimension of the quartz-rich sample was higher than those of the clay-rich and carbonate-rich samples, indicating different pore size distributions among the samples. The number density of pores was highest in the clay-rich sample and lowest in the quartz-rich sample. Contrast matching with water and toluene mixtures showed that the accessibility of pores to water and toluene also varied among the samples. In general, water accessed approximately 70-80% of the larger pores (>80 nm radius) in all three samples. At smaller pore sizes (~5-80 nm radius), the fraction of accessible pores decreases. The lowest accessibility to both fluids occurs at a pore throat size of ~25 nm radius, with the quartz-rich sample exhibiting lower accessibility than the clay- and carbonate-rich samples. The mechanism for this behaviour is unclear, but because the mineralogy of the three samples varies, it is likely that the inaccessible pores in this size range are associated with organics and not with a specific mineral within the samples. At even smaller pore sizes (~<2.5 nm radius), the fraction of pores accessible to water increases again to approximately 70-80% in all samples. Accessibility to toluene generally follows that of water; however, in the smallest pores (~<2.5 nm radius), accessibility to toluene decreases, especially in the clay-rich sample, which contains about 30% more closed pores than the quartz- and carbonate-rich samples. Results from this study show that the mineralogy of producing intervals within a shale reservoir can affect the accessibility of pores to water and toluene, and these mineralogic differences may affect hydrocarbon storage and production and hydraulic fracturing characteristics.

  5. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined. Sample sizes in back pain trials, and the reporting of sample size calculations, may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
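
    To put these proportions in context, the per-arm sample size needed to detect a standardized mean difference d can be approximated with the standard normal formula n = 2 (z_{1-alpha/2} + z_{1-beta})^2 / d^2. This is the generic textbook calculation, not a method from the review itself.

    ```python
    import math
    from scipy.stats import norm

    def n_per_arm(smd, power=0.80, alpha=0.05):
        # normal-approximation sample size per arm for a two-arm trial
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return math.ceil(2 * (z_a + z_b) ** 2 / smd ** 2)

    for d in (0.3, 0.5):
        print(d, n_per_arm(d))  # ~175/arm for d=0.3, ~63/arm for d=0.5
    ```

    At roughly 63 per arm for d = 0.5, the review's average trial of 153 people sits close to the boundary, which accords with only one-third of trials being powered for effects below 0.5.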

  6. Sampling designs for contaminant temporal trend analyses using sedentary species exemplified by the snails Bellamya aeruginosa and Viviparus viviparus.

    PubMed

    Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders

    2017-10-01

    Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support the detection of temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulations were performed to evaluate three sampling strategies by estimating the sample size required to detect an annual change of 5% with a statistical power of 80% or 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above criteria than the other two approaches. A fixed sampling site requires the smallest sample size but may not be representative of the intended study object, e.g. a lake, and is also sensitive to changes at that particular sampling site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies, and could guide users with respect to the sample sizes required under each sampling design for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
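
    A bare-bones version of the simulation idea: generate lognormal contaminant concentrations that decline 5% per year, fit a log-linear regression, and count how often the trend is detected at the 5% level. The monitoring period, individual variability and single-site structure below are hypothetical simplifications of the study's setup.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def trend_power(n_per_year, years=10, annual_change=0.05,
                    cv_individual=0.5, n_sim=2000):
        """Monte Carlo power of a log-linear regression to detect a 5%
        annual decline with n_per_year individual analyses per year."""
        t = np.repeat(np.arange(years), n_per_year)
        sigma = np.sqrt(np.log(1 + cv_individual ** 2))  # lognormal sd
        hits = 0
        for _ in range(n_sim):
            mu = np.log(100.0) + t * np.log(1 - annual_change)
            y = rng.lognormal(mean=mu, sigma=sigma)
            hits += stats.linregress(t, np.log(y)).pvalue < 0.05
        return hits / n_sim

    for n in (5, 10, 20):
        print(n, trend_power(n))  # power rises with samples per year
    ```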

  7. Effect of carbon source on the morphology and electrochemical performances of LiFePO4/C nanocomposites.

    PubMed

    Liu, Shuxin; Wang, Haibin; Yin, Hengbo; Wang, Hong; He, Jichuan

    2014-03-01

    Carbon-coated LiFePO4 (LiFePO4/C) nanocomposite materials were successfully synthesized by a sol-gel method. The microstructure and morphology of the LiFePO4/C nanocomposites were characterized by X-ray diffraction, Raman spectroscopy and scanning electron microscopy. The results showed that the carbon layers derived from different dispersants and carbon sources had different degrees of graphitization, and that sugar decomposed to form more graphite-like carbon. The carbon source and heat-treatment temperature had some effect on the particle size and morphology; the sample LFP-S700, synthesized with sugar as the carbon source at 700 degrees C, had a smaller particle size, uniform size distribution and spherical shape. The electrochemical behavior of the LiFePO4/C nanocomposites was analyzed using galvanostatic measurements and cyclic voltammetry (CV). The results showed that the sample LFP-S700 had higher discharge specific capacities, a higher apparent lithium ion diffusion coefficient and a lower charge transfer resistance. The excellent electrochemical performance of sample LFP-S700 could be attributed to the high degree of graphitization of its carbon, its smaller particle size and its uniform size distribution.

  8. Meta-analysis of multiple outcomes: a multilevel approach.

    PubMed

    Van den Noortgate, Wim; López-López, José Antonio; Marín-Martínez, Fulgencio; Sánchez-Meca, Julio

    2015-12-01

    In meta-analysis, dependent effect sizes are very common. An example is where, in one or more studies, the effect of an intervention is evaluated on multiple outcome variables for the same sample of participants. In this paper, we evaluate a three-level meta-analytic model to account for this kind of dependence, extending the simulation results of Van den Noortgate, López-López, Marín-Martínez, and Sánchez-Meca (Behavior Research Methods, 45, 576-594, 2013) by allowing for variation in the number of effect sizes per study, in the between-study variance, in the correlations between pairs of outcomes, and in the sample sizes of the studies. At the same time, we explore the performance of the approach when the outcomes used in a study can be regarded as a random sample from a population of outcomes. We conclude that although this approach is relatively simple and does not require prior estimates of the sampling covariances between effect sizes, it gives appropriate mean effect size estimates, standard error estimates, and confidence interval coverage proportions in a variety of realistic situations.

  9. Methods for Determining Particle Size Distributions from Nuclear Detonations.

    DTIC Science & Technology

    1987-03-01

    [Abstract unavailable; the indexed fragment is table-of-contents residue listing sections on nuclear debris, sample preparation, photon correlation spectroscopy (PCS) parameters, vendor analyses, Brookhaven analyses using the method of cumulants and the histogram method, and TEM particle measurements.]

  10. Emission characteristics and chemical components of size-segregated particulate matter in iron and steel industry

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Cheng, Shuiyuan; Yao, Sen; Xu, Tiebing; Zhang, Tingting; Ma, Yuetao; Wang, Hongliang; Duan, Wenjiao

    2018-06-01

    As one of the highest energy-consumption and pollution industries, the iron and steel industry is regarded as a most important source of particulate matter emissions. In this study, the chemical components of size-segregated particulate matter (PM) emitted from different manufacturing units in the iron and steel industry were sampled with a comprehensive sampling system. Results showed that the average particle mass concentration was highest in the sintering process, followed by the puddling, steelmaking and rolling processes. PM samples were divided into eight size fractions for chemical analysis: SO42- and NH4+ partitioned more into fine particles, while most of the Ca2+ was concentrated in coarse particles; the size distribution of the mineral elements depended on the raw materials used. Moreover, a local database of PM chemical source profiles for the iron and steel industry was built and applied in CMAQ modeling to simulate SO42- and NO3- concentrations; the results showed that the accuracy of the model simulation improved with the local chemical source profiles compared with the SPECIATE database. The results of this study are expected to be helpful in understanding the components of PM in the iron and steel industry and to contribute to source apportionment research.

  11. Infrared reflectance spectra: Effects of particle size, provenance and preparation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Yin-Fong; Myers, Tanya L.; Brauer, Carolyn S.

    2014-09-22

    We have recently developed methods for making more accurate infrared total and diffuse directional - hemispherical reflectance measurements using an integrating sphere. We have found that reflectance spectra of solids, especially powders, are influenced by a number of factors including the sample preparation method, the particle size and morphology, as well as the sample origin. On a quantitative basis we have investigated some of these parameters and the effects they have on reflectance spectra, particularly in the longwave infrared. In the IR the spectral features may be observed as either maxima or minima: In general, upward-going peaks in the reflectance spectrum result from strong surface scattering, i.e. rays that are reflected from the surface without bulk penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. The light signals reflected from solids usually encompass all such effects, but with strong dependencies on particle size and preparation. This paper measures the reflectance spectra in the 1.3 - 16 micron range for various bulk materials that have a combination of strong and weak absorption bands in order to observe the effects on the spectral features: Bulk materials were ground with a mortar and pestle and sieved to separate the samples into various size fractions between 5 and 500 microns. The median particle size is demonstrated to have large effects on the reflectance spectra. For certain minerals we also observe significant spectral change depending on the geologic origin of the sample. All three such effects (particle size, preparation and provenance) result in substantial change in the reflectance spectra for solid materials; successful identification algorithms will require sufficient flexibility to account for these parameters.

  12. Infrared reflectance spectra: effects of particle size, provenance and preparation

    NASA Astrophysics Data System (ADS)

    Su, Yin-Fong; Myers, Tanya L.; Brauer, Carolyn S.; Blake, Thomas A.; Forland, Brenda M.; Szecsody, J. E.; Johnson, Timothy J.

    2014-10-01

    We have recently developed methods for making more accurate infrared total and diffuse directional - hemispherical reflectance measurements using an integrating sphere. We have found that reflectance spectra of solids, especially powders, are influenced by a number of factors including the sample preparation method, the particle size and morphology, as well as the sample origin. On a quantitative basis we have investigated some of these parameters and the effects they have on reflectance spectra, particularly in the longwave infrared. In the IR the spectral features may be observed as either maxima or minima: In general, upward-going peaks in the reflectance spectrum result from strong surface scattering, i.e. rays that are reflected from the surface without bulk penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. The light signals reflected from solids usually encompass all such effects, but with strong dependencies on particle size and preparation. This paper measures the reflectance spectra in the 1.3 - 16 micron range for various bulk materials that have a combination of strong and weak absorption bands in order to observe the effects on the spectral features: Bulk materials were ground with a mortar and pestle and sieved to separate the samples into various size fractions between 5 and 500 microns. The median particle size is demonstrated to have large effects on the reflectance spectra. For certain minerals we also observe significant spectral change depending on the geologic origin of the sample. All three such effects (particle size, preparation and provenance) result in substantial change in the reflectance spectra for solid materials; successful identification algorithms will require sufficient flexibility to account for these parameters.

  13. Hindlimb muscle architecture in non-human great apes and a comparison of methods for analysing inter-species variation

    PubMed Central

    Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S

    2011-01-01

    By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples are often composed of different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ANCOVAs (a form of general linear model, GLM), to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ANCOVA models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased, it is preferable to include scaling analyses along with ANCOVAs in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area, between the different species. This may reflect the relatively close evolutionary relationships of the non-human great apes, a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000

  14. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability, they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, ..., nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

  15. Size effect on atomic structure in low-dimensional Cu-Zr amorphous systems.

    PubMed

    Zhang, W B; Liu, J; Lu, S H; Zhang, H; Wang, H; Wang, X D; Cao, Q P; Zhang, D X; Jiang, J Z

    2017-08-04

    The size effect on the atomic structure of a Cu64Zr36 amorphous system, including zero-dimensional small-size amorphous particles (SSAPs) and two-dimensional small-size amorphous films (SSAFs), together with a bulk sample, was investigated by molecular dynamics simulations. We revealed that sample size strongly affects the local atomic structure in both Cu64Zr36 SSAPs and SSAFs, which are composed of core and shell (surface) components. Compared with the core component, the shell component of the SSAPs has a lower average coordination number and average bond length, a higher degree of ordering, and a lower packing density due to the segregation of Cu atoms on the shell of the Cu64Zr36 SSAPs. These atomic structure differences in SSAPs of various sizes result in different glass transition temperatures: the glass transition temperature of the shell component is found to be 577 K, much lower than the 910 K of the core component. We further extended the size effect on structure and glass transition temperature to Cu64Zr36 SSAFs, and revealed that Tg decreases as the SSAF becomes thinner due to the following factors: different dynamic motion (mean square displacement), different densities of the core and surface, and Cu segregation on the surface of the SSAFs. The results obtained here differ from those for the size effect on the atomic structure of nanometer-sized crystalline metallic alloys.

  16. [A comparison of convenience sampling and purposive sampling].

    PubMed

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, the opportunity to participate is not equal for all qualified individuals in the target population, and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on the study purpose, with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable, and sample size is determined by data saturation, not by statistical power analysis.

  17. Field substitution of nonresponders can maintain sample size and structure without altering survey estimates-the experience of the Italian behavioral risk factors surveillance system (PASSI).

    PubMed

    Baldissera, Sandro; Ferrante, Gianluigi; Quarchioni, Elisa; Minardi, Valentina; Possenti, Valentina; Carrozzi, Giuliano; Masocco, Maria; Salmaso, Stefania

    2014-04-01

    Field substitution of nonrespondents can be used to maintain the planned sample size and structure in surveys but may introduce additional bias. Sample weighting is suggested as the preferable alternative; however, limited empirical evidence exists comparing the two methods. We wanted to assess the impact of substitution on surveillance results using data from Progressi delle Aziende Sanitarie per la Salute in Italia-Progress by Local Health Units towards a Healthier Italy (PASSI). PASSI is conducted by Local Health Units (LHUs) through telephone interviews of stratified random samples of residents. Nonrespondents are replaced with substitutes randomly preselected in the same LHU stratum. We compared the weighted estimates obtained in the original PASSI sample (used as a reference) and in the substitutes' sample. The differences were evaluated using a Wald test. In 2011, 50,697 units were selected: 37,252 were from the original sample and 13,445 were substitutes; 37,162 persons were interviewed. The initially planned size and demographic composition were restored. No significant differences in the estimates between the original and the substitutes' sample were found. In our experience, field substitution is an acceptable method for dealing with nonresponse, maintaining the characteristics of the original sample without affecting the results. This evidence can support appropriate decisions about planning and implementing a surveillance system. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Ratio of Cut Surface Area to Leaf Sample Volume for Water Potential Measurements by Thermocouple Psychrometers

    PubMed Central

    Walker, Sue; Oosterhuis, Derrick M.; Wiebe, Herman H.

    1984-01-01

    Evaporative losses from the cut edge of leaf samples are of considerable importance in measurements of leaf water potential using thermocouple psychrometers. The ratio of cut surface area to leaf sample volume (area to volume ratio) has been used to give an estimate of possible effects of evaporative loss in relation to sample size. A wide range of sample sizes with different area to volume ratios has been used. Our results using Glycine max L. Merr. cv Bragg indicate that leaf samples with area to volume values less than 0.2 square millimeter per cubic millimeter give psychrometric leaf water potential measurements that compare favorably with pressure chamber measurements. PMID:16663578

  19. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
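
    The mechanism can be reproduced by simulation: if journals publish mainly significant results, small studies enter the literature only when their observed effects are large, which by itself induces a negative effect-size/sample-size correlation. All numbers below are illustrative assumptions, not the authors' data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    d_true, n_studies = 0.2, 5000         # one common, modest true effect
    n = rng.integers(10, 200, n_studies)  # per-group sample sizes
    se = np.sqrt(2 / n)                   # approx. SE of a standardized mean diff
    d_obs = rng.normal(d_true, se)        # observed effect sizes
    p = 2 * stats.norm.sf(np.abs(d_obs) / se)

    published = p < 0.05                  # selective publication filter
    r, _ = stats.pearsonr(d_obs[published], n[published])
    print(f"r(effect size, n) among 'published' studies = {r:.2f}")  # negative
    ```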

  20. Concurrent measurements of size-segregated particulate sulfate, nitrate and ammonium using quartz fiber filters, glass fiber filters and cellulose membranes

    NASA Astrophysics Data System (ADS)

    Tian, Shili; Pan, Yuepeng; Wang, Jian; Wang, Yuesi

    2016-11-01

    Current science and policy requirements have focused attention on the need to expand and improve particulate matter (PM) sampling methods. To explore how the sampling filter type affects artifacts in PM composition measurements, size-resolved particulate SO42-, NO3- and NH4+ (SNA) were measured on quartz fiber filters (QFF), glass fiber filters (GFF) and cellulose membranes (CM) concurrently in an urban area of Beijing on both clean and hazy days. The results showed that SNA concentrations in most of the size fractions exhibited the following patterns on the different filters: CM > QFF > GFF for NH4+; GFF > QFF > CM for SO42-; and GFF > CM > QFF for NO3-. The different patterns in coarse particles were mainly driven by filter acidity, and those in fine particles by the hygroscopicity of the filters (especially in the 0.65-2.1 μm size fraction). Filter acidity and hygroscopicity also shifted the peaks of the annual mean size distributions of SNA on QFF from 0.43-0.65 μm on clean days to 0.65-1.1 μm on hazy days. However, this size shift was not as distinct for samples measured with CM and GFF. In addition, relative humidity (RH) and pollution levels are important factors that can enhance shifts in the particulate size mode of SNA between clean and hazy days. Consequently, the annual mean size distributions of SNA had maxima at 0.65-1.1 μm for QFF samples and 0.43-0.65 μm for GFF and CM samples. Compared with NH4+ and SO42-, NO3- is more sensitive to RH and pollution levels; accordingly, the annual mean size distribution of NO3- exhibited a peak at 0.65-1.1 μm for CM samples instead of 0.43-0.65 μm. These methodological uncertainties should be considered when quantifying the concentrations and size distributions of SNA under different RH and haze conditions.

  1. Characterization of the Particle Size and Polydispersity of Dicumarol Using Solid-State NMR Spectroscopy.

    PubMed

    Dempah, Kassibla Elodie; Lubach, Joseph W; Munson, Eric J

    2017-03-06

    A variety of particle sizes of a model compound, dicumarol, were prepared and characterized in order to investigate the correlation between particle size and solid-state NMR (SSNMR) proton spin-lattice relaxation (1H T1) times. Conventional laser diffraction and scanning electron microscopy were used as particle size measurement techniques and showed crystalline dicumarol samples with sizes ranging from tens of micrometers to a few micrometers. Dicumarol samples were prepared using both bottom-up and top-down particle size control approaches, via antisolvent microprecipitation and cryogrinding. It was observed that smaller particles of dicumarol generally had shorter 1H T1 times than larger ones. Additionally, cryomilled particles had the shortest 1H T1 times encountered (8 s). The SSNMR 1H T1 times of all the samples were measured: as-received dicumarol had a T1 of 1500 s, whereas the 1H T1 times of the precipitated samples ranged from 20 to 80 s, with no apparent change in the physical form of dicumarol. Physical mixtures of different-sized particles were also analyzed to determine the effect of sample inhomogeneity on 1H T1 values. Mixtures of cryoground and as-received dicumarol were clearly inhomogeneous, as they did not fit well to a one-component relaxation model, but could be fit much better to a two-component model with both fast- and slow-relaxing regimes. The results indicate that samples of crystalline dicumarol containing two significantly different particle size populations could be deconvoluted solely on the basis of their differences in 1H T1 times. The relative populations of each particle size regime could also be approximated using two-component fitting models. Using NMR spin-diffusion theory as a reference, and taking into account the presence of crystal defects, a model for the correlation between the particle size of dicumarol and its 1H T1 time was proposed.
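
    A sketch of the two-component analysis described above: fit a saturation-recovery curve with fast- and slow-relaxing fractions and read off the relative populations. The delays, intensities and starting values are hypothetical, and the paper's actual fitting procedure may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_component(t, m0, f, t1_fast, t1_slow):
        # fraction f relaxes with t1_fast (e.g. cryoground fines),
        # fraction 1-f with t1_slow (e.g. large as-received crystals)
        return m0 * (f * (1 - np.exp(-t / t1_fast))
                     + (1 - f) * (1 - np.exp(-t / t1_slow)))

    # hypothetical recovery delays (s) and signal intensities for a mixture
    t = np.array([1, 4, 16, 64, 256, 1024, 4096], float)
    m = np.array([0.06, 0.20, 0.44, 0.52, 0.58, 0.75, 0.97])

    (m0, f, t1_fast, t1_slow), _ = curve_fit(
        two_component, t, m, p0=[1.0, 0.5, 8.0, 1500.0],
        bounds=([0, 0, 0.1, 100], [2, 1, 100, 5000]))
    print(f"fast fraction ~ {f:.2f}, T1 ~ {t1_fast:.0f} s and {t1_slow:.0f} s")
    ```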

  2. Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias

    PubMed Central

    Chambers, David A.; Glasgow, Russell E.

    2014-01-01

    A number of commentaries have suggested that large studies are more reliable than smaller studies, and there is a growing interest in the analysis of "big data" that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. PMID:25043853

  3. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    NASA Astrophysics Data System (ADS)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots, ranging in number from 523 to 2143, were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and the other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction when describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly, using appropriate taxon sampling curves, before explaining differences in diversity.
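
    Two of the estimators compared above are simple enough to compute directly: Chao1 from species abundances and the first-order jackknife (Jack1) from a plots-by-species incidence matrix. A sketch with hypothetical counts:

    ```python
    import numpy as np

    def chao1(abundances):
        """Chao1 richness estimate from species abundance counts."""
        a = np.asarray(abundances)
        s_obs = np.sum(a > 0)
        f1, f2 = np.sum(a == 1), np.sum(a == 2)   # singletons, doubletons
        if f2 == 0:                                # bias-corrected form
            return s_obs + f1 * (f1 - 1) / 2.0
        return s_obs + f1 ** 2 / (2.0 * f2)

    def jack1(incidence):
        """First-order jackknife from a plots x species 0/1 matrix."""
        inc = np.asarray(incidence, bool)
        m = inc.shape[0]                           # number of plots
        s_obs = np.sum(inc.any(axis=0))
        q1 = np.sum(inc.sum(axis=0) == 1)          # species in exactly 1 plot
        return s_obs + q1 * (m - 1) / m

    counts = [25, 12, 6, 3, 2, 1, 1, 1]            # hypothetical abundances
    print(chao1(counts))                           # 8 + 3^2/(2*1) = 12.5
    ```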

  4. Particle size and surface area effects on the thin-pulse shock initiation of Diaminoazoxyfurazan (DAAF)

    NASA Astrophysics Data System (ADS)

    Burritt, Rosemary; Francois, Elizabeth; Windler, Gary; Chavez, David

    2017-06-01

    Diaminoazoxyfurazan (DAAF) has many of the safety characteristics of an insensitive high explosive (IHE): it is extremely insensitive to impact and friction and is comparable to triaminotrinitrobenzene (TATB) in this respect. Conversely, it demonstrates many performance characteristics of a conventional high explosive (CHE). DAAF has a small failure diameter of about 1.25 mm and can be sensitive to shock under the right conditions. Large-particle DAAF will not initiate in a typical exploding foil initiator (EFI) configuration, but smaller particle sizes will. Large-particle DAAF (40 μm) was crash-precipitated and ball-milled into six distinct samples and pressed into pellets with a density of 1.60 g/cc (91% TMD). To investigate the effect of particle size and surface area on the direct initiation of DAAF, multiple threshold tests were performed on each sample in different EFI configurations, which varied in flyer thickness and/or bridge size. Comparative tests examined the threshold voltage and were correlated with Photon Doppler Velocimetry (PDV) results. The samples with larger particle sizes and surface areas required more energy to initiate, while those with smaller particle sizes required less energy and could be initiated with smaller-diameter flyers.

  5. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    The T-Method is one of the techniques governed under the Mahalanobis Taguchi System, developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with very limited sample sizes. Users of the T-Method must clearly understand the trend of the population data, since the method does not account for the effect of outliers within it. Outliers may cause apparent non-normality, under which the classical methods break down. There exist robust parameter estimates that provide satisfactory results both when the data contain outliers and when the data are free of them; among these are the robust location and scale estimators called Shamos-Bickel (SB) and Hodges-Lehmann (HL), which can be used in place of the classical mean and standard deviation. Embedding these into the normalization stage of the T-Method may help enhance its accuracy, and also allows the robustness of the T-Method itself to be analysed. However, the results of the higher-sample-size case study show that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with minimal error differences compared with the T-Method. The trend in prediction error percentages is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers pose a low risk, the T-Method performs much better, and with a larger sample size containing extreme outliers the T-Method likewise shows better prediction than the alternatives. For the case studies conducted in this research, the standard T-Method normalization gives satisfactory results, and adapting HL and SB (or the normal mean and standard deviation) into it is not worthwhile, since doing so changes the error percentages only minimally. Normalization using the T-Method is still considered to carry lower risk with respect to outlier effects.
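
    Both robust estimators are straightforward to compute: the Hodges-Lehmann (HL) location estimate is the median of all pairwise averages, and the Shamos-Bickel (SB) scale estimate is, up to a consistency constant, the median of all pairwise absolute differences. A sketch with hypothetical data showing their insensitivity to an outlier:

    ```python
    import numpy as np
    from itertools import combinations, combinations_with_replacement

    def hodges_lehmann(x):
        # median of all pairwise (Walsh) averages, robust to outliers
        return np.median([(a + b) / 2
                          for a, b in combinations_with_replacement(x, 2)])

    def shamos(x):
        # median of pairwise absolute differences; 1.1926 makes it
        # consistent with the standard deviation under normality
        return 1.1926 * np.median([abs(a - b) for a, b in combinations(x, 2)])

    data = [9.8, 10.1, 10.4, 9.9, 10.0, 25.0]      # one gross outlier
    print(np.mean(data), np.std(data, ddof=1))     # dragged by the outlier
    print(hodges_lehmann(data), shamos(data))      # stay near the data bulk
    ```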

  6. Chemical composition and source apportionment of size fractionated particulate matter in Cleveland, Ohio, USA.

    PubMed

    Kim, Yong Ho; Krantz, Q Todd; McGee, John; Kovalcik, Kasey D; Duvall, Rachelle M; Willis, Robert D; Kamal, Ali S; Landis, Matthew S; Norris, Gary A; Gilmour, M Ian

    2016-11-01

    The Cleveland airshed comprises a complex mixture of industrial source emissions that contribute to periods of non-attainment for fine particulate matter (PM2.5) and are associated with increased adverse health outcomes in the exposed population. The specific PM sources responsible for the health effects, however, are not fully understood. Size-fractionated PM (coarse, fine, and ultrafine) samples were collected using a ChemVol sampler at an urban site (G.T. Craig (GTC)) and a rural site (Chippewa Lake (CLM)) from July 2009 to June 2010, and then chemically analyzed. The resulting speciated PM data were apportioned by EPA positive matrix factorization to identify emission sources for each size fraction and location. For comparison with the ChemVol results, PM samples were also collected with sequential dichotomous and passive samplers and evaluated for source contributions at each sampling site. The ChemVol results showed that annual average concentrations of PM, elemental carbon, and inorganic elements in the coarse fraction at GTC were ∼2, ∼7, and ∼3 times higher than those at CLM, respectively, while the smaller size fractions at both sites showed similar annual average concentrations. Seasonal variations of secondary aerosols (e.g., high NO3- levels in winter and high SO42- levels in summer) were observed at both sites. Source apportionment results demonstrated that the PM samples at GTC and CLM were enriched with local industrial sources (e.g., a steel plant and a coal-fired power plant), but their contributions were influenced by meteorological conditions and the emission sources' operating conditions. Taken together, the year-long PM collection and data analysis provide valuable insights into the characteristics and sources of PM impacting the Cleveland airshed in both the urban center and the rural upwind background location. These data will be used to classify the PM samples for toxicology studies to determine which PM sources, species, and size fractions are of greatest health concern. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Metallographic Characterization of Wrought Depleted Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsyth, Robert Thomas; Hill, Mary Ann

    Metallographic characterization was performed on wrought depleted uranium (DU) samples taken from the longitudinal and transverse orientations at specific locations on two specimens. Characterization of the samples included general microstructure, inclusion analysis, grain size analysis, and microhardness testing. Comparisons of the characterization results were made to determine any differences based on specimen, sample orientation, or sample location. In addition, the characterization results for the wrought DU samples were compared with data obtained from the metallographic characterization of previously characterized cast DU samples. No differences were observed in microstructure, inclusion size, morphology, and distribution, or grain size with regard to specimen, location, or orientation for the wrought depleted uranium samples. However, a small difference was observed in average hardness with regard to orientation at the same locations within the same specimen. The longitudinal samples were slightly harder than the transverse samples from the same location of the same specimen; this was true for both wrought DU specimens. Comparing the wrought DU sample data with the previously characterized cast DU sample data, distinct differences in microstructure, inclusion size, morphology and distribution, grain size, and microhardness were observed. As expected, the microstructure of the wrought DU samples consisted of small recrystallized grains which were uniform, randomly oriented, and equiaxed, with minimal twinning observed in only a few grains. In contrast, the cast DU microstructure consisted of large irregularly shaped grains with extensive twinning observed in most grains. Inclusions in the wrought DU samples were elongated, broken and cracked, with light and dark phases observed in some inclusions. The mean inclusion area percentage for the wrought DU samples ranged from 0.08% to 0.34%, and the average inclusion density over all wrought DU samples was 1.62E+04/cm2. Inclusions in the cast DU samples were equiaxed and intact, with light and dark phases observed in some inclusions. The mean inclusion area percentage for the cast DU samples ranged from 0.93% to 1.00%, and the average inclusion density over all cast DU samples was 2.83E+04/cm2. The average mean grain area over all wrought DU samples was 141 μm2, while the average mean grain area over all cast DU samples was 1.7 mm2. The average Knoop microhardness over all wrought DU samples was 215 HK, and that over all cast DU samples was 264 HK.

  8. Application of SAXS and SANS in evaluation of porosity, pore size distribution and surface area of coal

    USGS Publications Warehouse

    Radlinski, A.P.; Mastalerz, Maria; Hinde, A.L.; Hainbuchner, M.; Rauch, H.; Baron, M.; Lin, J.S.; Fan, L.; Thiyagarajan, P.

    2004-01-01

    This paper discusses the applicability of small angle X-ray scattering (SAXS) and small angle neutron scattering (SANS) techniques for determining the porosity, pore size distribution and internal specific surface area in coals. The method is noninvasive, fast, inexpensive and does not require complex sample preparation. It uses coal grains of about 0.8 mm size mounted in standard pellets as used for petrographic studies. Assuming spherical pore geometry, the scattering data are converted into the pore size distribution in the size range 1 nm (10 Å) to 20 μm (200,000 Å) in diameter, accounting for both open and closed pores. FTIR as well as SAXS and SANS data for seven samples of oriented whole coals and corresponding pellets with vitrinite reflectance (Ro) values in the range 0.55% to 5.15% are presented and analyzed. Our results demonstrate that pellets adequately represent the average microstructure of coal samples. The scattering data have been used to calculate the maximum surface area available for methane adsorption. Total porosity as percentage of sample volume is calculated and compared with worldwide trends. By demonstrating the applicability of SAXS and SANS techniques to determine the porosity, pore size distribution and surface area in coals, we provide a new and efficient tool, which can be used for any type of coal sample, from a thin slice to a representative sample of a thick seam. © 2004 Elsevier B.V. All rights reserved.

  9. Validation of fixed sample size plans for monitoring lepidopteran pests of Brassica oleracea crops in North Korea.

    PubMed

    Hamilton, A J; Waters, E K; Kim, H J; Pak, W S; Furlong, M J

    2009-06-01

    The combined action of two lepidopteran pests, Plutella xylostella L. (Plutellidae) and Pieris rapae L. (Pieridae), causes significant yield losses in cabbage (Brassica oleracea variety capitata) crops in the Democratic People's Republic of Korea. Integrated pest management (IPM) strategies for these cropping systems are in their infancy, and sampling plans have not yet been developed. We used statistical resampling to assess the performance of fixed sample size plans (ranging from 10 to 50 plants). First, the precision (D = SE/mean) of the plans in estimating the population mean was assessed. There was substantial variation in achieved D for all sample sizes, and sample sizes of at least 20 and 45 plants were required to achieve the acceptable precision level of D ≤ 0.3 at least 50% and 75% of the time, respectively. Second, the performance of the plans in classifying the population density relative to an economic threshold (ET) was assessed. To account for the different damage potentials of the two species, the ETs were defined in terms of standard insects (SIs), where 1 SI = 1 P. rapae = 5 P. xylostella larvae. The plans were implemented using different ETs for the three growth stages of the crop: precupping (1 SI/plant), cupping (0.5 SI/plant), and heading (4 SI/plant). Improvement in the classification certainty with increasing sample size could be seen in the increasing steepness of the operating characteristic curves. Rather than prescribe a particular plan, we suggest that the results of these analyses be used to inform practitioners of the relative merits of the different sample sizes.
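
    Below is a minimal sketch, in Python, of the precision assessment described above: per-plant counts are converted to standard insects (1 SI = 1 P. rapae = 5 P. xylostella larvae), plants are resampled at a fixed sample size, and the share of resamples achieving D = SE/mean ≤ 0.3 is reported. The per-plant counts and Poisson rates are hypothetical, not the paper's field data.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical per-plant larval counts for one scouted field.
      p_xylostella = rng.poisson(3.0, size=200)
      p_rapae = rng.poisson(0.4, size=200)

      # 1 SI = 1 P. rapae = 5 P. xylostella larvae.
      si_per_plant = p_rapae + p_xylostella / 5.0

      def share_meeting_precision(pop, n, reps=5000, d_max=0.3):
          """Fraction of resamples of size n with D = SE/mean <= d_max."""
          ok = 0
          for _ in range(reps):
              s = rng.choice(pop, size=n, replace=True)
              m = s.mean()
              if m > 0 and s.std(ddof=1) / np.sqrt(n) / m <= d_max:
                  ok += 1
          return ok / reps

      for n in (10, 20, 30, 45):
          print(n, share_meeting_precision(si_per_plant, n))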

  10. How conservative is Fisher's exact test? A quantitative evaluation of the two-sample comparative binomial trial.

    PubMed

    Crans, Gerald G; Shuster, Jonathan J

    2008-08-15

    The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional method of Fisher, Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level α = 0.05. Additionally, this numerical method is used to define new significance levels α* = α + ε, where ε is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified α (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using α* instead of α) in the two-sample comparative binomial trial. Copyright © 2008 John Wiley & Sons, Ltd.
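
    The size calculation can be sketched directly: enumerate the rejection region of the two-sided FET over all (x1, x2) outcome pairs, then take the supremum of the null rejection probability over the common success probability p. A minimal Python sketch follows; the per-group n and the grid over p are illustrative, and the paper's algorithm for n up to 125 and for locating α* is not reproduced here.

      import numpy as np
      from scipy.stats import binom, fisher_exact

      def fet_size(n, alpha=0.05, grid=201):
          """Actual size of the two-sided FET with n subjects per group."""
          # Rejection region over all (x1, x2) outcome pairs.
          reject = np.zeros((n + 1, n + 1), dtype=bool)
          for x1 in range(n + 1):
              for x2 in range(n + 1):
                  table = [[x1, n - x1], [x2, n - x2]]
                  reject[x1, x2] = fisher_exact(table)[1] <= alpha  # [1] = p-value
          # Size = sup over the common null probability p of P(reject).
          size = 0.0
          for p in np.linspace(0.001, 0.999, grid):
              w = binom.pmf(np.arange(n + 1), n, p)
              size = max(size, float(w @ reject @ w))
          return size

      print(fet_size(20))  # typically well below the nominal 0.05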

  11. DESCARTES' RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA.

    PubMed

    Bhaskar, Anand; Song, Yun S

    2014-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.

  13. Sample size allocation for food item radiation monitoring and safety inspection.

    PubMed

    Seto, Mayumi; Uriu, Koichiro

    2015-03-01

    The objective of this study is to identify a procedure for determining sample size allocation for food radiation inspections of more than one food item to minimize the potential risk to consumers of internal radiation exposure. We consider a simplified case of food radiation monitoring and safety inspection in which a risk manager is required to monitor two food items, milk and spinach, in a contaminated area. Three protocols for food radiation monitoring with different sample size allocations were assessed by simulating random sampling and inspections of milk and spinach in a conceptual monitoring site. Distributions of ¹³¹I and radiocesium concentrations were determined in reference to ¹³¹I and radiocesium concentrations detected in Fukushima prefecture, Japan, for March and April 2011. The results of the simulations suggested that a protocol that allocates sample size to milk and spinach based on the estimation of ¹³¹I and radiocesium concentrations, using the apparent decay rate constants sequentially calculated from past monitoring data, can most effectively minimize the potential risks of internal radiation exposure. © 2014 Society for Risk Analysis.
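
    The sequential estimation step can be sketched as follows: fit an apparent decay rate constant to past monitoring data for each food item, predict the concentration at the next inspection date, and split the sampling budget accordingly. All numbers below are hypothetical, and splitting the budget proportionally to the predicted concentration is an assumption made for illustration; the paper's risk-minimizing allocation rule is not reproduced.

      import numpy as np

      # Hypothetical past monitoring data (not the Fukushima dataset).
      days = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
      milk_conc = np.array([120.0, 90.0, 70.0, 52.0, 40.0])        # Bq/kg
      spinach_conc = np.array([900.0, 600.0, 410.0, 280.0, 190.0])  # Bq/kg

      def apparent_decay(t, conc):
          """Fit ln C = ln C0 - k*t and return (C0, k)."""
          slope, intercept = np.polyfit(t, np.log(conc), 1)
          return np.exp(intercept), -slope

      def allocate(total_n, predicted):
          """Split total_n proportionally to predicted concentrations (assumption)."""
          w = np.asarray(predicted) / np.sum(predicted)
          return np.round(w * total_n).astype(int)

      t_next = 15.0
      predictions = []
      for conc in (milk_conc, spinach_conc):
          c0, k = apparent_decay(days, conc)
          predictions.append(c0 * np.exp(-k * t_next))
      print(allocate(60, predictions))  # samples for (milk, spinach)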

  14. The effects of neutralized particles on the sampling efficiency of polyurethane foam used to estimate the extrathoracic deposition fraction.

    PubMed

    Tomyn, Ronald L; Sleeth, Darrah K; Thiese, Matthew S; Larson, Rodney R

    2016-01-01

    In addition to chemical composition, the site of deposition of inhaled particles is important for determining the potential health effects of an exposure. As a result, the International Organization for Standardization adopted a particle deposition sampling convention. This includes extrathoracic particle deposition sampling conventions for the anterior nasal passages (ET1) and the posterior nasal and oral passages (ET2). This study assessed how well a polyurethane foam insert placed in an Institute of Occupational Medicine (IOM) sampler can match an extrathoracic deposition sampling convention, while accounting for possible static buildup in the test particles. In this way, the study aimed to assess whether neutralized particles affected the performance of this sampler for estimating extrathoracic particle deposition. A total of three different particle sizes (4.9, 9.5, and 12.8 µm) were used. For each trial, one particle size was introduced into a low-speed wind tunnel with the wind speed set at 0.2 m/s (∼40 ft/min). This wind speed was chosen to closely match the conditions of most indoor working environments. Each particle size was tested twice: either neutralized, using a high-voltage neutralizer, or left in its normal (non-neutralized) state as standard particles. IOM samplers were fitted with a polyurethane foam insert and placed on a rotating mannequin inside the wind tunnel. Foam sampling efficiencies were calculated for all trials for comparison against the normalized ET1 sampling deposition convention. The foam sampling efficiencies matched the ET1 deposition convention well for the larger particle sizes, but showed a general trend of underestimation for all three particle sizes. The results of a Wilcoxon rank sum test also showed that only at 4.9 µm was there a statistically significant difference (p-value = 0.03) between the foam sampling efficiency using the standard particles and that using the neutralized particles. This is interpreted to mean that static buildup may be occurring, and that neutralizing the 4.9 µm particles did affect the performance of the foam sampler when estimating extrathoracic particle deposition.

  15. Size variation in early human mandibles and molars from Klasies River, South Africa: comparison with other middle and late Pleistocene assemblages and with modern humans.

    PubMed

    Royer, Danielle F; Lockwood, Charles A; Scott, Jeremiah E; Grine, Frederick E

    2009-10-01

    Previous studies of the Middle Stone Age human remains from Klasies River have concluded that they exhibited more sexual dimorphism than extant populations, but these claims have not been assessed statistically. We evaluate these claims by comparing size variation in the best-represented elements at the site, namely the mandibular corpora and M(2)s, to that in samples from three recent human populations using resampling methods. We also examine size variation in these same elements from seven additional middle and late Pleistocene sites: Skhūl, Dolní Vestonice, Sima de los Huesos, Arago, Krapina, Shanidar, and Vindija. Our results demonstrate that size variation in the Klasies assemblage was greater than in recent humans, consistent with arguments that the Klasies people were more dimorphic than living humans. Variation in the Skhūl, Dolní Vestonice, and Sima de los Huesos mandibular samples is also higher than in the recent human samples, indicating that the Klasies sample was not unusual among middle and late Pleistocene hominins. In contrast, the Neandertal samples (Krapina, Shanidar, and Vindija) do not evince relatively high mandibular and molar variation, which may indicate that the level of dimorphism in Neandertals was similar to that observed in extant humans. These results suggest that the reduced levels of dimorphism in Neandertals and living humans may have developed independently, though larger fossil samples are needed to test this hypothesis.

  16. Is the permeability of naturally fractured rocks scale dependent?

    NASA Astrophysics Data System (ADS)

    Azizmohammadi, Siroos; Matthäi, Stephan K.

    2017-09-01

    The equivalent permeability, keq of stratified fractured porous rocks and its anisotropy is important for hydrocarbon reservoir engineering, groundwater hydrology, and subsurface contaminant transport. However, it is difficult to constrain this tensor property as it is strongly influenced by infrequent large fractures. Boreholes miss them and their directional sampling bias affects the collected geostatistical data. Samples taken at any scale smaller than that of interest truncate distributions and this bias leads to an incorrect characterization and property upscaling. To better understand this sampling problem, we have investigated a collection of outcrop-data-based Discrete Fracture and Matrix (DFM) models with mechanically constrained fracture aperture distributions, trying to establish a useful Representative Elementary Volume (REV). Finite-element analysis and flow-based upscaling have been used to determine keq eigenvalues and anisotropy. While our results indicate a convergence toward a scale-invariant keq REV with increasing sample size, keq magnitude can have multi-modal distributions. REV size relates to the length of dilated fracture segments as opposed to overall fracture length. Tensor orientation and degree of anisotropy also converge with sample size. However, the REV for keq anisotropy is larger than that for keq magnitude. Across scales, tensor orientation varies spatially, reflecting inhomogeneity of the fracture patterns. Inhomogeneity is particularly pronounced where the ambient stress selectively activates late- as opposed to early (through-going) fractures. While we cannot detect any increase of keq with sample size as postulated in some earlier studies, our results highlight a strong keq anisotropy that influences scale dependence.

  17. Crack identification and evolution law in the vibration failure process of loaded coal

    NASA Astrophysics Data System (ADS)

    Li, Chengwu; Ai, Dihao; Sun, Xiaoyuan; Xie, Beijing

    2017-08-01

    To study the characteristics of coal cracks produced in the vibration failure process, we set up a test simulation system for failure under static loads and under combined static and dynamic loads, and prepared coal samples with different particle sizes, formation pressures, and firmness coefficients. Through static-load damage testing of the coal samples followed by combined dynamic (vibration exciter) and static (jack) destructive testing, crack images of the coal samples under load were obtained. Combined with digital image processing technology, a high-precision, real-time crack identification algorithm is proposed. Taking the crack features of the coal samples under different load conditions as the research object, we analyzed the distribution of cracks on the surface of the coal samples and the factors influencing crack evolution, using the proposed algorithm and a high-resolution industrial camera. Experimental results showed that the major portion of the crack after excitation is located in the rear of the coal sample, where the vibration exciter cannot act. Under the same disturbance conditions, crack size and particle size exhibit a positive correlation, while crack size and formation pressure exhibit a negative correlation. Soft coal is more likely than hard coal to exhibit crack evolution, and more easily undergoes instability failure. The experimental results and the crack identification algorithm provide a solid basis for the prevention and control of instability and failure of coal and rock mass, and they are helpful in improving the monitoring of coal and rock dynamic disasters.

  18. Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.

    PubMed

    Gajewski, Byron J; Mayo, Matthew S

    2006-08-15

    A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians, where the first clinician is pessimistic about the drug and the second is optimistic. We tabulate the results for sample size design using the fact that a simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
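
    Because a mixture of Betas is conjugate for the Beta-Binomial model, the posterior is again a Beta mixture: each component updates as Beta(a+x, b+n-x), and the mixture weights update by the components' marginal likelihoods. A minimal sketch of a sample size search in this spirit; the priors, target, observed response rate, and posterior threshold are illustrative, not the paper's tabulated design.

      import numpy as np
      from scipy.special import betaln
      from scipy.stats import beta as beta_dist

      def prob_exceeds_target(target, x, n, priors, weights):
          """Posterior P(true response > target) under a mixture-of-Betas prior."""
          # Updated weights: w_i' ~ w_i * B(a+x, b+n-x) / B(a, b)
          # (the binomial coefficient cancels in the normalization).
          log_w = np.log(weights) + np.array(
              [betaln(a + x, b + n - x) - betaln(a, b) for a, b in priors])
          w_post = np.exp(log_w - log_w.max())
          w_post /= w_post.sum()
          tails = [beta_dist.sf(target, a + x, b + n - x) for a, b in priors]
          return float(np.dot(w_post, tails))

      # Pessimistic and optimistic clinicians, equal prior weight.
      priors, weights = [(2.0, 8.0), (8.0, 2.0)], [0.5, 0.5]

      # Smallest n with posterior P(p > 0.5) >= 0.8, assuming a 60% observed rate.
      for n in range(10, 201, 5):
          x = round(0.6 * n)
          if prob_exceeds_target(0.5, x, n, priors, weights) >= 0.8:
              print("n =", n)
              break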

  19. A LDR-PCR approach for multiplex polymorphisms genotyping of severely degraded DNA with fragment sizes <100 bp.

    PubMed

    Zhang, Zhen; Wang, Bao-Jie; Guan, Hong-Yu; Pang, Hao; Xuan, Jin-Feng

    2009-11-01

    Reducing amplicon sizes has become a major strategy for analyzing degraded DNA typical of forensic samples. However, amplicon sizes in current mini-short tandem repeat polymerase chain reaction (PCR) and mini-sequencing assays are still not suitable for analysis of severely degraded DNA. In this study, we present a multiplex typing method that couples the ligase detection reaction with PCR and that can be used to identify single nucleotide polymorphisms and small-scale insertions/deletions in a sample of severely fragmented DNA. This method adopts thermostable ligation for allele discrimination and subsequent PCR for signal enhancement. In this study, four polymorphic loci were used to assess the ability of this technique to discriminate alleles in an artificially degraded sample of DNA with fragment sizes <100 bp. Our results showed clear allelic discrimination of single or multiple loci, suggesting that this method might aid in the analysis of extremely degraded samples in which allelic dropout of larger fragments is observed.

  20. How accurate is the Pearson r-from-Z approximation? A Monte Carlo simulation study.

    PubMed

    Hittner, James B; May, Kim

    2012-01-01

    The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) from the ratio of two quantities: the standard normal deviate equivalent (Z-score) corresponding to a one-tailed p-value divided by the square root of the total (pooled) sample size. The formula has utility in meta-analytic work when reports of research contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. Results indicated that in some cases the formula did accurately estimate the sample correlation. However, when sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the Pearson r-from-Z approximation was very inaccurate. Detailed figures that provide guidance as to when the Pearson r-from-Z formula will likely yield valid inferences are presented.
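
    The approximation itself is one line: r ≈ Z/√N, where Z is the standard normal deviate for the reported one-tailed p-value and N is the total (pooled) sample size. A minimal Monte Carlo sketch in the spirit of the study; the simulation parameters are illustrative, not those of the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)

      def r_from_z(p_one_tailed, n_total):
          """Pearson r-from-Z approximation: r = Z / sqrt(N)."""
          return stats.norm.isf(p_one_tailed) / np.sqrt(n_total)

      # Check against a two-sample t-test, d = 0.5, n = 30 per group.
      n, d, reps = 30, 0.5, 2000
      est, ref = [], []
      for _ in range(reps):
          x = rng.normal(0.0, 1.0, n)
          y = rng.normal(d, 1.0, n)
          t, p_two = stats.ttest_ind(x, y)
          est.append(r_from_z(p_two / 2.0, 2 * n))
          # Point-biserial r computed from t, for comparison.
          ref.append(abs(t) / np.sqrt(t**2 + 2 * n - 2))
      print(np.mean(est), np.mean(ref))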

  1. Automated system measuring triple oxygen and nitrogen isotope ratios in nitrate using the bacterial method and N2 O decomposition by microwave discharge.

    PubMed

    Hattori, Shohei; Savarino, Joel; Kamezaki, Kazuki; Ishino, Sakiko; Dyckmans, Jens; Fujinawa, Tamaki; Caillon, Nicolas; Barbero, Albane; Mukotaka, Arata; Toyoda, Sakae; Well, Reinhard; Yoshida, Naohiro

    2016-12-30

    Triple oxygen and nitrogen isotope ratios in nitrate are powerful tools for assessing atmospheric nitrate formation pathways and their contribution to ecosystems. N₂O decomposition using microwave-induced plasma (MIP) has been used only for measurements of oxygen isotopes to date, but it is also possible to measure nitrogen isotopes during the same analytical run. The main improvements to a previous system are (i) an automated system for distributing nitrate to the bacterial medium, (ii) N₂O separation by gas chromatography before N₂O decomposition using the MIP, (iii) use of a corundum tube for microwave discharge, and (iv) development of an automated system for isotopic measurements. Three nitrate standards with sample sizes of 60, 80, 100, and 120 nmol were measured to investigate the sample size dependence of the isotope measurements. The δ¹⁷O, δ¹⁸O, and Δ¹⁷O values increased with increasing sample size, although the δ¹⁵N value showed no significant size dependence. Different calibration slopes and intercepts were obtained for different sample amounts, indicating that the extent of oxygen exchange is also dependent on sample size. The sample-size-dependent slopes and intercepts were fitted using natural log (ln) regression curves, from which slopes and intercepts can be estimated to apply corrections for any sample size. When using 100 nmol samples, the standard deviations of residuals from the regression lines for this system were 0.5‰, 0.3‰, and 0.1‰, respectively, for the δ¹⁸O, Δ¹⁷O, and δ¹⁵N values, results that are not inferior to those from other systems using gold tube or gold wire. An automated system was thus developed to measure triple oxygen and nitrogen isotopes in nitrate using N₂O decomposition by MIP. This system enables us to measure both triple oxygen and nitrogen isotopes in nitrate with comparable precision and sample throughput (23 min per sample on average), and minimal manual treatment. Copyright © 2016 John Wiley & Sons, Ltd.
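
    The size correction described above amounts to fitting slope(n) = a·ln(n) + b (and likewise for the intercept) across the measured sample amounts, then evaluating the fit at the actual sample size. A minimal sketch, with hypothetical calibration slopes standing in for the paper's values:

      import numpy as np

      # Hypothetical calibration slopes at each sample amount (nmol).
      size_nmol = np.array([60.0, 80.0, 100.0, 120.0])
      slopes = np.array([0.91, 0.94, 0.96, 0.97])

      # Fit slope(n) = a*ln(n) + b.
      a, b = np.polyfit(np.log(size_nmol), slopes, 1)

      def slope_for(n_nmol):
          """Interpolated calibration slope for an arbitrary sample size."""
          return a * np.log(n_nmol) + b

      print(slope_for(90.0))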

  2. On sample size and different interpretations of snow stability datasets

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test, or combinations of various tests, in order to detect differences with aspect and elevation. This raises the question of how capable such stability interpretations are of supporting conclusions. There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements are needed to obtain similar results (mainly stability differences with aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined for a given test, significance level, and power, by calculating the mean and standard deviation of the complete dataset; with this method it can also be determined whether the complete dataset itself constitutes an adequate sample size. (ii) Smaller subsets were created with aspect distributions similar to the large dataset; we used 100 different subsets for each sample size. Statistical variations obtained in the complete dataset were also tested on the smaller subsets using the Mann-Whitney or the Kruskal-Wallis test, and for each subset size the number of subsets in which the significance level was reached was counted. For these tests no nominal data scale was assumed. (iii) For the same subsets described above, the distribution of the aspect median was determined, and a count was made of how often this distribution differed substantially from the distribution obtained with the complete dataset. Since two valid stability interpretations were available (an objective and a subjective interpretation, as described above), the effect of the arbitrary choice of interpretation on spatial variability results was also tested. In over one third of the cases the two interpretations came to different results. The effect of these differences was studied with a method similar to (iii): the distribution of the aspect median was determined for subsets of the complete dataset using both interpretations, and compared against each other as well as against the results of the complete dataset. For the complete dataset the two interpretations showed mainly identical results. Therefore, the subset size was determined from the point at which the results of the two interpretations converged.
A universal result for the optimal subset size cannot be presented, since results differed between the situations contained in the dataset. The optimal subset size is thus dependent on the stability variation in a given situation, which is unknown initially. There are indications that for some situations even the complete dataset might not be large enough. At a subset size of approximately 25, the significant differences between aspect groups (as determined using the whole dataset) were only obtained in one out of five situations. In some situations, up to 20% of the subsets showed a substantially different distribution of the aspect median. Thus, in most cases, 25 measurements (which can be achieved by six two-person teams in one day) did not allow reliable conclusions to be drawn.
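
    The subset-counting procedure in (ii) can be sketched as follows: draw many random subsets of a fixed size from two aspect groups, run the Mann-Whitney test on each, and report the share of subsets in which the difference remains significant. The stability scores below are hypothetical stand-ins for the regional dataset:

      import numpy as np
      from scipy.stats import mannwhitneyu

      rng = np.random.default_rng(3)

      # Hypothetical stability scores by aspect (higher = more stable).
      north = rng.normal(3.0, 1.0, 400)
      south = rng.normal(3.4, 1.0, 400)

      def detection_rate(a, b, subset_size, n_subsets=100, alpha=0.05):
          """Share of random subsets in which the aspect difference is significant."""
          hits = 0
          for _ in range(n_subsets):
              sa = rng.choice(a, subset_size, replace=False)
              sb = rng.choice(b, subset_size, replace=False)
              if mannwhitneyu(sa, sb).pvalue <= alpha:
                  hits += 1
          return hits / n_subsets

      for n in (10, 25, 50, 100):
          print(n, detection_rate(north, south, n))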

  3. Surface-sediment grain-size distribution and sediment transport in the subaqueous Mekong Delta, Vietnam

    NASA Astrophysics Data System (ADS)

    Nguyen, T. T.; Stattegger, K.; Nittrouer, C.; Phung, P. V.; Liu, P.; DeMaster, D. J.; Bui, D. V.; Le, A. D.; Nguyen, T. N.

    2016-02-01

    Surface-sediment samples collected in the coastal waters around the Mekong Delta (from the distributary channels to the Ca Mau Peninsula) were analyzed to determine the surface-sediment grain-size distribution and the sediment-transport trend in the subaqueous Mekong Delta. The grain-size data set of 238 samples was obtained using the laser instruments Mastersizer 2000 and LS Particle Size Analyzer. Fourteen samples were selected for geochemical analysis (total-organic and carbonate content), and these geochemical results were used to assist in interpreting variations of granulometric parameters along the cross-shore transects. Nine transects were examined from the Cung Hau river mouth to the Ca Mau Peninsula, and six thematic maps covering the whole study area were made. The results indicate that: (1) in general, the sediment becomes finer from the delta front down to the prodelta and becomes coarser and more poorly sorted again on the adjacent inner shelf, owing to different sediment sources; (2) granulometric parameters vary among the sedimentary sub-environments of the subaqueous Mekong Delta, controlled by the distance from the sediment source and the hydrodynamic regime of each region; and (3) the net sediment transport is southwestward, toward the Ca Mau Peninsula.

  4. A simple method for the analysis of particle sizes of forage and total mixed rations.

    PubMed

    Lammers, B P; Buckmaster, D R; Heinrichs, A J

    1996-05-01

    A simple separator was developed to determine the particle sizes of forage and TMR that allows for easy separation of wet forage into three fractions and also allows plotting of the particle size distribution. The device was designed to mimic the laboratory-scale separator for forage particle sizes that was specified by Standard S424 of the American Society of Agricultural Engineers. A comparison of results using the standard device and the newly developed separator indicated no difference in ability to predict fractions of particles with maximum length of less than 8 and 19 mm. The separator requires a small quantity of sample (1.4 L) and is manually operated. The materials on the screens and bottom pan were weighed to obtain the cumulative percentage of sample that was undersize for the two fractions. The results were then plotted using the Weibull distribution, which proved to be the best fit for the data. Convenience samples of haycrop silage, corn silage, and TMR from farms in the northeastern US were analyzed using the forage and TMR separator, and the range of observed values are given.
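
    With two screens (8 and 19 mm) plus a pan, the cumulative undersize is known at two points, which is exactly enough to pin down the two-parameter Weibull CDF, F(x) = 1 - exp(-(x/λ)^k), used for plotting the particle size distribution. A minimal sketch with hypothetical mass fractions, not measured values:

      import numpy as np

      # Cumulative fraction of sample mass passing each screen (hypothetical).
      x = np.array([8.0, 19.0])   # screen openings, mm
      F = np.array([0.35, 0.80])  # cumulative undersize at each opening

      # Weibull CDF: F(x) = 1 - exp(-(x/lam)**k). Linearize:
      # ln(-ln(1 - F)) = k*ln(x) - k*ln(lam), then solve the 2x2 system.
      y = np.log(-np.log(1.0 - F))
      k = (y[1] - y[0]) / (np.log(x[1]) - np.log(x[0]))
      lam = np.exp(np.log(x[0]) - y[0] / k)

      print(f"k = {k:.2f}, lambda = {lam:.1f} mm")
      print("median particle size (mm):", lam * np.log(2.0) ** (1.0 / k))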

  5. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    PubMed

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered, and the effect of the level of inter-correlations between measurement occasions on Type I error rates was considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions; a second combines moderately correlated and highly correlated measurement occasions. From these four populations, without any between-group or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates were computed for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction). To examine the effects of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as m = 3, 6, and 9 measurement occasions. With respect to rANOVA, the results support the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results show a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. Fortunately, this bias can be compensated for by means of large sample sizes. Accordingly, MLM-UN can be recommended even for small sample sizes with about three measurement occasions, and for large sample sizes with about nine measurement occasions.
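
    A minimal sketch of the uncorrected-rANOVA arm of such a simulation: generate null data (no within-subject effect) from a covariance matrix that mixes uncorrelated and highly correlated occasions, compute the uncorrected repeated measures F-test, and count rejections. The block covariance below is an illustrative violation of sphericity, not one of the paper's four populations:

      import numpy as np
      from scipy.stats import f as f_dist

      rng = np.random.default_rng(11)

      def ranova_p(Y):
          """Uncorrected repeated-measures ANOVA p-value for an n x m data matrix."""
          n, m = Y.shape
          grand = Y.mean()
          ss_occ = n * ((Y.mean(axis=0) - grand) ** 2).sum()
          ss_subj = m * ((Y.mean(axis=1) - grand) ** 2).sum()
          ss_err = ((Y - grand) ** 2).sum() - ss_occ - ss_subj
          F = (ss_occ / (m - 1)) / (ss_err / ((n - 1) * (m - 1)))
          return f_dist.sf(F, m - 1, (n - 1) * (m - 1))

      # Sphericity violation: r = 0.9 within two blocks, 0 between them.
      m = 6
      cov = np.full((m, m), 0.9)
      np.fill_diagonal(cov, 1.0)
      cov[:3, 3:] = cov[3:, :3] = 0.0

      n, reps, alpha = 20, 2000, 0.05
      hits = sum(ranova_p(rng.multivariate_normal(np.zeros(m), cov, n)) < alpha
                 for _ in range(reps))
      print("empirical Type I error:", hits / reps)  # typically inflated above 0.05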

  6. Mixing problems in using indicators for measuring regional blood flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushioda, E.; Nuwayhid, B.; Tabsh, K.

    A basic requirement for using indicators to measure blood flow is adequate mixing of the indicator with blood prior to sampling. This requirement has been met by depositing the indicator in the heart and sampling from an artery. Recently, authors have injected microspheres into veins and sampled from venous sites. The present studies were designed to investigate the mixing problems in sheep and rabbits by means of Cardio-Green and labeled microspheres. The indicators were injected at different points in the circulatory system, and blood was sampled at different levels of the venous and arterial systems. Results show the following: (a) When an indicator of small molecular size (Cardio-Green) is allowed to pass through the heart chambers, adequate mixing is achieved, yielding accurate and reproducible results. (b) When any indicator (Cardio-Green or microspheres) is injected into veins and sampling is done at any point in the venous system, mixing is inadequate, yielding flow results that are inconsistent and erratic. (c) For an indicator of large molecular size (microspheres), injecting into the left side of the heart and sampling from arterial sites yields accurate and reproducible results regardless of whether blood is sampled continuously or intermittently.

  7. Robust gene selection methods using weighting schemes for microarray data analysis.

    PubMed

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performance of many gene selection techniques is highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques by applying a simple modification to significance analysis of microarrays (SAM). To demonstrate the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes, along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and the sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level increased or the sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of the simulation study and real data analysis demonstrate that our proposed methods are effective for detecting significant genes and for classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.

  8. Beyond Gorilla and Pongo: alternative models for evaluating variation and sexual dimorphism in fossil hominoid samples.

    PubMed

    Scott, Jeremiah E; Schrein, Caitlin M; Kelley, Jay

    2009-10-01

    Sexual size dimorphism in the postcanine dentition of the late Miocene hominoid Lufengpithecus lufengensis exceeds that in Pongo pygmaeus, demonstrating that the maximum degree of molar size dimorphism in apes is not represented among the extant Hominoidea. It has not been established, however, that the molars of Pongo are more dimorphic than those of any other living primate. In this study, we used resampling-based methods to compare molar dimorphism in Gorilla, Pongo, and Lufengpithecus to that in the papionin Mandrillus leucophaeus to test two hypotheses: (1) Pongo possesses the most size-dimorphic molars among living primates, and (2) molar size dimorphism in Lufengpithecus is greater than that in the most dimorphic living primates. Our results show that M. leucophaeus exceeds the great apes in its overall level of dimorphism and that L. lufengensis is more dimorphic than the extant species. Using these samples, we also evaluated molar dimorphism and taxonomic composition in two other Miocene ape samples: Ouranopithecus macedoniensis from Greece, specimens of which can be sexed based on associated canines and P(3)s, and the Sivapithecus sample from Haritalyangar, India. Ouranopithecus is more dimorphic than the extant taxa but is similar to Lufengpithecus, demonstrating that the level of molar dimorphism required of the Greek fossil sample under the single-species taxonomy is not unprecedented when the comparative framework is expanded to include extinct primates. In contrast, the Haritalyangar Sivapithecus sample, if it represents a single species, exhibits substantially greater molar dimorphism than does Lufengpithecus. Given these results, the taxonomic status of this sample remains equivocal.

  9. Monitoring landscape metrics by point sampling: accuracy in estimating Shannon's diversity and edge density.

    PubMed

    Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran

    2010-05-01

    Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and edge density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte Carlo simulation was applied to study the performance of different designs. Random and systematic sampling were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes, with seven or 20 land cover classes, to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of the Shannon's diversity estimator was shown to decrease with increasing sample size. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived, showing that point sampling can be a competitive alternative to complete wall-to-wall mapping.
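
    A minimal sketch of the point-sampling estimator for Shannon's diversity: sample random points from a categorical land-cover map, tabulate class proportions among the sampled points, and compute H' = -Σ p_k ln p_k. The raster below is a hypothetical stand-in for a NILS map; note the downward bias of the estimator at small sample sizes, as described above:

      import numpy as np

      rng = np.random.default_rng(5)

      # Hypothetical land-cover raster with 7 classes.
      landscape = rng.integers(0, 7, size=(500, 500))

      def shannon_from_points(raster, n_points):
          """Estimate Shannon's diversity H' = -sum p_k ln p_k from random points."""
          rows = rng.integers(0, raster.shape[0], n_points)
          cols = rng.integers(0, raster.shape[1], n_points)
          _, counts = np.unique(raster[rows, cols], return_counts=True)
          p = counts / n_points
          return -(p * np.log(p)).sum()

      # Reference value from the complete map, then point-sample estimates.
      _, c = np.unique(landscape, return_counts=True)
      p = c / landscape.size
      print("wall-to-wall H':", -(p * np.log(p)).sum())
      for n in (50, 200, 1000):
          print(n, shannon_from_points(landscape, n))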

  10. Is it appropriate to composite fish samples for mercury trend monitoring and consumption advisories?

    PubMed

    Gandhi, Nilima; Bhavsar, Satyendra P; Gewurtz, Sarah B; Drouillard, Ken G; Arhonditsis, George B; Petro, Steve

    2016-03-01

    Monitoring mercury levels in fish can be costly because variation by space, time, and fish type/size needs to be captured. Here, we explored if compositing fish samples to decrease analytical costs would reduce the effectiveness of the monitoring objectives. Six compositing methods were evaluated by applying them to an existing extensive dataset, and examining their performance in reproducing the fish consumption advisories and temporal trends. The methods resulted in varying amounts (average 34-72%) of reductions in samples, but all (except one) reproduced advisories very well (96-97% of the advisories did not change or were one category more restrictive compared to analysis of individual samples). Similarly, the methods performed reasonably well in recreating temporal trends, especially when longer-term and frequent measurements were considered. The results indicate that compositing samples within 5 cm fish size bins, or retaining the largest/smallest individuals and compositing in-between samples in batches of 5 with decreasing fish size, would be the best approaches. Based on the literature, the findings from this study are applicable to fillet, muscle plug and whole fish mercury monitoring studies. The compositing methods may also be suitable for monitoring Persistent Organic Pollutants (POPs) in fish. Overall, compositing fish samples for mercury monitoring could result in substantial savings (approximately 60% of the analytical cost) and should be considered in fish mercury monitoring, especially in long-term programs or when study cost is a concern. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  11. The effects of sample size on population genomic analyses--implications for the tests of neutrality.

    PubMed

    Subramanian, Sankar

    2016-02-20

    One of the fundamental measures of molecular genetic variation is Watterson's estimator (θ), which is based on the number of segregating sites. The estimation of θ is unbiased only under neutrality and constant population size. It is well known that the estimation of θ is biased when these assumptions are violated; however, the effect of sample size in modulating the bias has not been well appreciated. We examined this issue in detail based on large-scale exome data and robust simulations. Our investigation revealed that sample size appreciably influences θ estimation, and that this effect is much greater for constrained genomic regions than for neutral regions. For instance, θ estimated for synonymous sites using 512 human exomes was 1.9 times higher than that obtained using 16 exomes; for the nonsynonymous sites of the same data, this difference was 2.5 times. We observed a positive correlation between the rate of increase in θ estimates (with respect to sample size) and the magnitude of selection pressure. For example, θ estimated for the nonsynonymous sites of highly constrained genes (dN/dS < 0.1) using 512 exomes was 3.6 times higher than that estimated using 16 exomes; in contrast, this difference was only 2 times for the less constrained genes (dN/dS > 0.9). The results of this study reveal the extent of underestimation owing to small sample sizes and thus emphasize the importance of sample size in estimating a number of population genomic parameters. Our results have serious implications for neutrality tests such as Tajima's D, Fu and Li's D, and those based on the McDonald-Kreitman test: the Neutrality Index and the fraction of adaptive substitutions. For instance, use of 16 exomes produced a 2.4 times higher proportion of adaptive substitutions than that obtained using 512 exomes (24% vs. 10%).
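
    For reference, Watterson's estimator is θ_W = S / a_n, where S is the number of segregating sites and a_n = Σ_{i=1}^{n-1} 1/i for n sampled sequences. A minimal sketch (the S values are hypothetical) showing that θ_W only stays flat across sample sizes if S keeps pace with a_n:

      import numpy as np

      def watterson_theta(S, n):
          """Watterson's estimator: theta_W = S / a_n, a_n = sum_{i=1}^{n-1} 1/i."""
          a_n = np.sum(1.0 / np.arange(1, n))
          return S / a_n

      # Hypothetical segregating-site counts for 16 vs. 512 sampled exomes.
      print(watterson_theta(120, 16))   # a_16  ~= 3.32
      print(watterson_theta(500, 512))  # a_512 ~= 6.81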

  12. Mesoporous carbon synthesized from different pore sizes of SBA-15 for high density electrode supercapacitor application

    NASA Astrophysics Data System (ADS)

    Jamil, Farinaa Md; Sulaiman, Mohd Ali; Ibrahim, Suhaina Mohd; Masrom, Abdul Kadir; Yahya, Muhd Zu Azhan

    2017-12-01

    A series of mesoporous carbon samples was synthesized using a silica template, SBA-15, with two different pore sizes. An impregnation method was applied, using glucose as the precursor to be converted into carbon. Appropriate carbonization and silica-removal processes were carried out to produce a series of mesoporous carbons with different pore sizes and surface areas. The mesoporous carbon samples were then assembled as electrodes, and their performance was tested using cyclic voltammetry and impedance spectroscopy to study the effect of ion transport into the various pore sizes in an electric double-layer capacitor (EDLC) system. 6 M KOH was used as the electrolyte, at scan rates of 10, 20, 30, and 50 mV s⁻¹. The results showed that the pore size of the carbon increased as the pore size of the template increased, and that the specific capacitance improved with increasing carbon pore size.

  13. Laboratory analyses of micron-sized solid grains: Experimental techniques and recent results

    NASA Technical Reports Server (NTRS)

    Colangeli, L.; Bussoletti, E.; Blanco, A.; Borghesi, A.; Fonti, S.; Orofino, V.; Schwehm, G.

    1989-01-01

    Morphological and spectrophotometric investigations have been extensively applied in past years to various kinds of micron- and/or submicron-sized grains formed from materials that are candidates to be present in space. The samples are produced in the laboratory and then characterized in their physico-chemical properties. Some of the most recent results obtained on various kinds of carbonaceous materials are reported. Main attention is devoted to spectroscopic results in the VUV and IR wavelength ranges, where many of the analyzed samples show typical fingerprints that can also be identified in astrophysical and cometary materials. The laboratory methodologies used so far are also critically discussed in order to point out capabilities and present limitations, in view of possible application to returned comet samples. Suggestions are given for developing new techniques that should overcome some of the problems faced in the manipulation and analysis of micron-sized solid samples.

  14. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    PubMed

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC, but all are relatively complicated to implement, and many perform poorly for large AUC values or small sample sizes. The AUC is actually a probability, so we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation, and the interval of the binormal AUC are very liberal. For small sample sizes, the Wald interval with continuity correction has a coverage probability comparable to that of the LT interval, and higher power. For large sample sizes, the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data are not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
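
    A minimal sketch of a Wald-type interval that treats the estimated AUC as a single proportion, per the idea above. Using the total sample size as the effective n is an assumption here; the exact modification and continuity correction in the paper may differ:

      from math import sqrt
      from scipy.stats import norm

      def wald_auc_ci(auc, n, alpha=0.05, continuity=False):
          """Wald-type CI treating the AUC as a proportion with effective size n."""
          z = norm.ppf(1 - alpha / 2)
          half = z * sqrt(auc * (1 - auc) / n)
          if continuity:
              half += 0.5 / n  # simple continuity adjustment (assumption)
          return max(0.0, auc - half), min(1.0, auc + half)

      print(wald_auc_ci(0.85, 60))
      print(wald_auc_ci(0.85, 60, continuity=True))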

  15. Robustness of methods for blinded sample size re-estimation with overdispersed count data.

    PubMed

    Schneider, Simon; Schmidli, Heinz; Friede, Tim

    2013-09-20

    Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation internal pilot studies have been found very useful, and very recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the EM-algorithm-based procedure with respect to aspects of its implementation, by studying the algorithm's dependence on the choice of convergence criterion; we find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to other competing procedures regarding operating characteristics such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios

    NASA Technical Reports Server (NTRS)

    Juarez, Alfredo; Harper, Susana A.

    2016-01-01

    The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes of inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during the test, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper, along with a proposed improvement to the standard test protocol resulting from this evaluation. The final proposed improved test protocol outlines an incremental-step method of determining optimal conditions, using increased sample sizes while considering test system safety limits. The proposed improved test method increases confidence in results obtained with the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.

  17. Technical note: Alternatives to reduce adipose tissue sampling bias.

    PubMed

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been made in the literature. The present study evaluated two sampling techniques (random and systematic) in an attempt to minimize sampling bias, and determined the minimum number of samples (from 1 to 15) needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined with a Coulter Counter. These results were then fitted with a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefit of increasing the number of samples and the advantage of the new sampling technique, the concept of the acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques as the number of samples increased from 1 to 15, with both techniques' acceptance ratios increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.

  18. Size-selective separation of polydisperse gold nanoparticles in supercritical ethane.

    PubMed

    Williams, Dylan P; Satherley, John

    2009-04-09

    The aim of this study was to use supercritical ethane to selectively disperse alkanethiol-stabilized gold nanoparticles of one size from a polydisperse sample in order to recover a monodisperse fraction of the nanoparticles. A polydisperse sample of metal nanoparticles with diameters in the range of 1-5 nm was prepared using established techniques and then further purified by Soxhlet extraction. The purified sample was subjected to supercritical ethane at a temperature of 318 K in the pressure range 50-276 bar. Particles were characterized by UV-vis absorption spectroscopy, TEM, and MALDI-TOF mass spectrometry. The results show that the dispersibility of the nanoparticles increases with increasing pressure; this effect is most pronounced for smaller nanoparticles. At the highest pressure investigated, the sample was effectively stripped of all the smaller particles, leaving a monodisperse sample. The relationship between dispersibility and supercritical fluid density for two different size samples of alkanethiol-stabilized gold nanoparticles was considered using the Chrastil chemical equilibrium model.
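
    In the Chrastil model, solubility follows ln S = k·ln ρ + a/T + b, so at fixed temperature the dispersibility-density relationship is linear on log-log axes, with slope k (the association number). A minimal sketch with hypothetical dispersibility-density pairs, not measured values:

      import numpy as np

      # Hypothetical dispersibility S (mg/mL) of one particle-size fraction
      # vs. supercritical ethane density rho (g/mL) at fixed T = 318 K.
      rho = np.array([0.25, 0.30, 0.35, 0.40])
      S = np.array([0.02, 0.07, 0.20, 0.55])

      # Chrastil at fixed T: ln S = k*ln(rho) + c.
      k, c = np.polyfit(np.log(rho), np.log(S), 1)
      print(f"association number k = {k:.1f}")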

  19. Correlation between standard Charpy and sub-size Charpy test results of selected steels in upper shelf region

    NASA Astrophysics Data System (ADS)

    Konopík, P.; Džugan, J.; Bucki, T.; Rzepa, S.; Rund, M.; Procházka, R.

    2017-02-01

    The absorbed energy obtained from Charpy impact tests is one of the most important values in many applications, for example in residual lifetime assessment of components in service. The minimum absorbed energy is often the crucial value for extending the service life of components such as turbines, boilers, and steam lines. Using portable electric discharge sampling equipment (EDSE), it is possible to sample experimental material non-destructively and subsequently produce mini-Charpy specimens. This paper presents a new approach to correlating sub-size Charpy test results with standard Charpy test results in the upper shelf region.

  20. Poly (lactic-co-glycolic acid) particles prepared by microfluidics and conventional methods. Modulated particle size and rheology.

    PubMed

    Perez, Aurora; Hernández, Rebeca; Velasco, Diego; Voicu, Dan; Mijangos, Carmen

    2015-03-01

    Microfluidic techniques are expected to provide a narrower particle size distribution than conventional methods for the preparation of poly (lactic-co-glycolic acid) (PLGA) microparticles. In addition, it is hypothesized that the particle size distribution of PLGA microparticles influences the settling behavior and rheological properties of their aqueous dispersions. For the preparation of PLGA particles, two different methods, microfluidic and conventional oil-in-water emulsification, were employed. The particle size and particle size distribution of PLGA particles prepared by microfluidics were studied as a function of the flow rate of the organic phase, while particles prepared by conventional methods were studied as a function of stirring rate. In order to study the stability and structural organization of the colloidal dispersions, settling experiments and oscillatory rheological measurements were carried out on aqueous dispersions of PLGA particles with different particle size distributions. The microfluidic technique allowed control of the size and size distribution of the droplets formed in the emulsification process. This resulted in a narrower particle size distribution for samples prepared by microfluidics than for samples prepared by conventional methods. Polydisperse samples showed a larger tendency to aggregate, thus confirming the advantages of microfluidics over conventional methods, especially if biomedical applications are envisaged. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Gravity or turbulence? IV. Collapsing cores in out-of-virial disguise

    NASA Astrophysics Data System (ADS)

    Ballesteros-Paredes, Javier; Vázquez-Semadeni, Enrique; Palau, Aina; Klessen, Ralf S.

    2018-06-01

    We study the dynamical state of massive cores using a simple analytical model, an observational sample, and numerical simulations of collapsing massive cores. From the analytical model, we find that cores increase their column density and velocity dispersion as they collapse, resulting in a time evolution path in the Larson velocity dispersion-size diagram from large sizes and small velocity dispersions to small sizes and large velocity dispersions, while they tend toward equipartition between gravitational and kinetic energy. From the observational sample, we find that: (a) cores with substantially different column densities in the sample do not follow a Larson-like linewidth-size relation. Instead, cores with higher column densities tend to be located in the upper-left corner of the Larson velocity dispersion σv, 3D-size R diagram, a result explained in the hierarchical and chaotic collapse scenario. (b) Cores appear to have overvirial values. Finally, our numerical simulations reproduce the behavior predicted by the analytical model and depicted in the observational sample: collapsing cores evolve towards larger velocity dispersions and smaller sizes as they collapse and increase their column density. More importantly, however, they also exhibit overvirial states. This apparent excess is due to the assumption that the gravitational energy is given by the energy of an isolated homogeneous sphere; the excess disappears when the gravitational energy is correctly calculated from the actual spatial mass distribution. We conclude that the observed energy budget of cores is consistent with their non-thermal motions being driven by their self-gravity and in the process of dynamical collapse.

  2. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    PubMed

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To design a survey estimating the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population of 1,340,362 women from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating a stratified random sampling simulation 1,000 times. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, estimates the distribution of breast density in Korean women to within a tolerance of 0.01%. Based on the results of our study, a nationwide survey estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
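
    The simulation logic is compact: repeatedly draw a proportionally allocated stratified sample from the registry, estimate the category proportions, and examine the spread of the estimates. The sketch below follows the abstract's three strata and n = 4,000, but the stratum sizes and density-category probabilities are hypothetical placeholders.

        # Minimal sketch of a repeated stratified-random-sampling simulation.
        import numpy as np

        rng = np.random.default_rng(1)
        strata = {"metropolitan": 700_000, "urban": 450_000, "rural": 190_362}
        density_probs = {  # hypothetical per-stratum category probabilities
            "metropolitan": [0.08, 0.35, 0.42, 0.15],
            "urban":        [0.10, 0.38, 0.40, 0.12],
            "rural":        [0.13, 0.42, 0.36, 0.09],
        }
        total_n, pop = 4_000, sum(strata.values())

        estimates = []
        for _ in range(1_000):                 # 1,000 repetitions, as in the study
            counts = np.zeros(4)
            for name, size in strata.items():
                n_h = round(total_n * size / pop)          # proportional allocation
                draws = rng.choice(4, size=n_h, p=density_probs[name])
                counts += np.bincount(draws, minlength=4) * (size / n_h)
            estimates.append(counts / pop)

        est = np.array(estimates)
        print("mean:", est.mean(axis=0).round(4), " sd:", est.std(axis=0).round(4))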

  3. Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin

    NASA Astrophysics Data System (ADS)

    He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu

    2017-06-01

    This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta of the tin sample, with a V-shaped groove etched in the free surface, are collected by a soft-recovery technique. The produced fragments are then automatically detected with fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison proves that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.
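
    The "linear combination of exponential distributions" can be fitted directly by maximum likelihood. The sketch below does this for a two-component exponential mixture on synthetic fragment sizes; the component count and all numbers are assumptions for illustration, not the paper's model or data.

        # Fit f(s) = w/l1*exp(-s/l1) + (1-w)/l2*exp(-s/l2) by maximum likelihood.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        # Synthetic stand-in for measured fragment sizes (mm): fine + coarse modes
        sizes = np.concatenate([rng.exponential(0.05, 700), rng.exponential(0.40, 300)])

        def nll(params):
            w, l1, l2 = params
            pdf = w / l1 * np.exp(-sizes / l1) + (1 - w) / l2 * np.exp(-sizes / l2)
            return -np.sum(np.log(pdf))

        res = minimize(nll, x0=[0.5, 0.1, 0.5], method="L-BFGS-B",
                       bounds=[(0.01, 0.99), (1e-4, None), (1e-4, None)])
        w, l1, l2 = res.x
        print(f"weight={w:.2f}, scale1={l1:.3f} mm, scale2={l2:.3f} mm")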

  4. Accounting for Incomplete Species Detection in Fish Community Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta

    2013-01-01

    Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach type. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining the effort required (e.g. number of sites versus occasions).
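
    The single-species model at the core of this approach can be illustrated in its simplest, constant-parameter form. The sketch below fits occupancy (psi) and detection (p) to synthetic detection histories by maximum likelihood; a real analysis would put covariates such as patch size, reach, and gear on the logit scale. The binomial coefficient is omitted from the likelihood because it is constant in the parameters.

        # Single-season occupancy model with constant psi and p.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n_sites, k, true_psi, true_p = 120, 4, 0.6, 0.35
        z = rng.random(n_sites) < true_psi                     # latent occupancy
        y = (rng.random((n_sites, k)) < true_p) & z[:, None]   # detection histories

        def nll(theta):
            psi, p = 1 / (1 + np.exp(-theta))                  # inverse-logit
            d = y.sum(axis=1)                                  # detections per site
            lik = np.where(d > 0,
                           psi * p ** d * (1 - p) ** (k - d),  # detected at least once
                           psi * (1 - p) ** k + (1 - psi))     # never detected
            return -np.sum(np.log(lik))

        res = minimize(nll, x0=np.zeros(2))
        psi_hat, p_hat = 1 / (1 + np.exp(-res.x))
        print(f"psi_hat = {psi_hat:.2f}, p_hat = {p_hat:.2f}")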

  5. Effect of the three-dimensional microstructure on the sound absorption of foams: A parametric study.

    PubMed

    Chevillotte, Fabien; Perrot, Camille

    2017-08-01

    The purpose of this work is to systematically study the effect of the throat and pore sizes on the sound absorbing properties of open-cell foams. The three-dimensional idealized unit cell used in this work makes it possible to mimic the acoustical macro-behavior of a large class of cellular solid foams. This study is carried out for normal incidence and also for a diffuse field excitation, over a relatively large range of sample thicknesses. The transport and sound absorbing properties are numerically studied as a function of the throat size, the pore size, and the sample thickness. The resulting diagrams show the ranges of throat sizes and pore sizes where the sound absorption is maximized due to the pore morphology, as a function of the sample thickness, and how this correlates with the corresponding transport parameters. These charts demonstrate, together with typical examples, how the morphological characteristics of a foam could be modified in order to increase the visco-thermal dissipation effects.

  6. Influences of Co doping on the structural and optical properties of ZnO nanostructures

    NASA Astrophysics Data System (ADS)

    Majeed Khan, M. A.; Wasi Khan, M.; Alhoshan, Mansour; Alsalhi, M. S.; Aldwayyan, A. S.

    2010-07-01

    Pure and Co-doped ZnO nanostructured samples have been synthesized by a chemical route. We have studied the structural and optical properties of the samples by using X-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM), field-emission transmission electron microscopy (FETEM), energy-dispersive X-ray (EDX) analysis and UV-VIS spectroscopy. The XRD patterns show that all the samples have the hexagonal wurtzite structure. Changes in crystallite size due to mechanical activation were also determined from the X-ray measurements. These results were correlated with changes in particle size followed by SEM and TEM. The average crystallite sizes obtained from XRD were between 20 and 25 nm. The TEM images showed that the average particle size of the undoped ZnO nanostructure was about 20 nm, whereas the smallest average grain size, at 3% Co, was about 15 nm. Optical parameters such as the absorption coefficient (α), energy band gap (Eg), refractive index (n), and dielectric constants (σ) have been determined using different methods.
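
    The abstract does not state how crystallite size was obtained from the XRD patterns; the standard route is the Scherrer equation, sketched below with illustrative numbers (the (101) reflection of wurtzite ZnO and an assumed peak width).

        # Scherrer estimate: D = K * lambda / (beta * cos(theta)).
        import numpy as np

        K = 0.9                     # shape factor (dimensionless)
        lam = 0.15406               # Cu K-alpha wavelength, nm
        fwhm_deg = 0.40             # assumed peak FWHM, degrees 2-theta
        two_theta_deg = 36.25       # ZnO (101) reflection, degrees

        beta = np.radians(fwhm_deg)               # FWHM in radians
        theta = np.radians(two_theta_deg / 2.0)   # Bragg angle
        D = K * lam / (beta * np.cos(theta))
        print(f"crystallite size ~ {D:.1f} nm")   # ~21 nm with these inputs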

  7. Experiments with central-limit properties of spatial samples from locally covariant random fields

    USGS Publications Warehouse

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.

  8. MUDMASTER: A Program for Calculating Crystalline Size Distributions and Strain from the Shapes of X-Ray Diffraction Peaks

    USGS Publications Warehouse

    Eberl, D.D.; Drits, V.A.; Środoń, Jan; Nüesch, R.

    1996-01-01

    Particle size may strongly influence the physical and chemical properties of a substance (e.g. its rheology, surface area, cation exchange capacity, solubility, etc.), and its measurement in rocks may yield geological information about ancient environments (sediment provenance, degree of metamorphism, degree of weathering, current directions, distance to shore, etc.). Therefore mineralogists, geologists, chemists, soil scientists, and others who deal with clay-size material would like to have a convenient method for measuring particle size distributions. Nano-size crystals generally are too fine to be measured by light microscopy. Laser scattering methods give only average particle sizes; therefore particle size cannot be measured in a particular crystallographic direction. Also, the particles measured by laser techniques may be composed of several different minerals, and may be agglomerations of individual crystals. Measurement by electron and atomic force microscopy is tedious, expensive, and time consuming. It is difficult to measure more than a few hundred particles per sample by these methods. This many measurements, often taking several days of intensive effort, may yield an accurate mean size for a sample, but may be too few to determine an accurate distribution of sizes. Measurement of size distributions by X-ray diffraction (XRD) solves these shortcomings. An X-ray scan of a sample occurs automatically, taking a few minutes to a few hours. The resulting XRD peaks average diffraction effects from billions of individual nano-size crystals. The size that is measured by XRD may be related to the size of the individual crystals of the mineral in the sample, rather than to the size of particles formed from the agglomeration of these crystals. Therefore one can determine the size of a particular mineral in a mixture of minerals, and the sizes in a particular crystallographic direction of that mineral.

  9. Image analysis of representative food structures: application of the bootstrap method.

    PubMed

    Ramírez, Cristian; Germain, Juan C; Aguilera, José M

    2009-08-01

    Images (for example, photomicrographs) are routinely used as qualitative evidence of the microstructure of foods. In quantitative image analysis it is important to estimate the area (or volume) to be sampled, the field of view, and the resolution. The bootstrap method is proposed to estimate the size of the sampling area as a function of the coefficient of variation (CV(Bn)) and standard error (SE(Bn)) of the bootstrap, taking sub-areas of different sizes. The bootstrap method was applied to simulated and real structures (apple tissue). For the simulated structures, 10 computer-generated images were constructed containing 225 black circles (elements) and different coefficients of variation (CV(image)). For apple tissue, 8 images of apple tissue containing cellular cavities with different CV(image) were analyzed. Results confirmed that for simulated and real structures, increasing the size of the sampling area decreased the CV(Bn) and SE(Bn). Furthermore, there was a linear relationship between the CV(image) and CV(Bn). For example, to obtain a CV(Bn) = 0.10 in an image with CV(image) = 0.60, a sampling area of 400 x 400 pixels (11% of the whole image) was required, whereas if CV(image) = 1.46, a sampling area of 1000 x 1000 pixels (69% of the whole image) became necessary. This suggests that a large dispersion of element sizes in an image requires increasingly larger sampling areas or a larger number of images.
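
    The bootstrap procedure is easy to reproduce in outline: draw random square sub-areas of the image, measure the feature of interest in each, and track the coefficient of variation as the sub-area grows. The sketch below does this for the area fraction of a synthetic binary image; the image, window sizes, and replicate count are illustrative assumptions.

        # Bootstrap CV of the area fraction versus sampling-window size.
        import numpy as np

        rng = np.random.default_rng(4)
        img = (rng.random((1200, 1200)) < 0.15).astype(float)  # 15% "elements"

        def bootstrap_cv(img, win, n_boot=200):
            h, w = img.shape
            fracs = np.empty(n_boot)
            for i in range(n_boot):
                r = rng.integers(0, h - win)
                c = rng.integers(0, w - win)
                fracs[i] = img[r:r + win, c:c + win].mean()
            return fracs.std() / fracs.mean()

        for win in (100, 200, 400, 800):
            print(f"window {win}x{win}: CV_Bn = {bootstrap_cv(img, win):.3f}")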

  10. You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic.

    PubMed

    McShane, Blakeley B; Böckenholt, Ulf

    2014-11-01

    Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation. © The Author(s) 2014.
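
    The core argument can be reproduced numerically (this illustrates the point, not the authors' exact formulae): compute power at the mean effect, then average the power over study-level effects drawn from a distribution with between-study standard deviation tau. All parameter values below are illustrative.

        # Nominal vs heterogeneity-adjusted power for a two-sample z-test.
        import numpy as np
        from scipy.stats import norm

        d_mean, tau, n, alpha = 0.5, 0.2, 64, 0.05
        se = np.sqrt(2.0 / n)                 # SE of a standardized mean difference
        z_crit = norm.ppf(1 - alpha / 2)

        # Standard power formula, ignoring between-study variation
        power_fixed = 1 - norm.cdf(z_crit - d_mean / se)

        # Power averaged over study-level effects d ~ N(d_mean, tau^2)
        d = np.random.default_rng(5).normal(d_mean, tau, 100_000)
        power_mixed = np.mean(1 - norm.cdf(z_crit - d / se))

        print(f"nominal: {power_fixed:.3f}  heterogeneity-adjusted: {power_mixed:.3f}")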

  11. Luminescence isochron dating: a new approach using different grain sizes.

    PubMed

    Zhao, H; Li, S H

    2002-01-01

    A new approach to isochron dating is described using different sizes of quartz and K-feldspar grains. The technique can be applied to sites with time-dependent external dose rates. It is assumed that any underestimation of the equivalent dose (De) using K-feldspar is by a factor F, which is independent of grain size (90-350 microm) for a given sample. Calibration of the beta source for different grain sizes is discussed, and then the sample ages are calculated using the differences between quartz and K-feldspar De from grains of similar size. Two aeolian sediment samples from north-eastern China are used to illustrate the application of the new method. It is confirmed that the observed values of De derived using K-feldspar underestimate the expected doses (based on the quartz De) but, nevertheless, these K-feldspar De values correlate linearly with the calculated internal dose rate contribution, supporting the assumption that the underestimation factor F is independent of grain size. The isochron ages are also compared with the results obtained using quartz De and the measured external dose rates.

  12. Estimation of Effect Size from a Series of Experiments Involving Paired Comparisons.

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    1993-01-01

    A distribution theory is derived for a G. V. Glass-type (1976) estimator of effect size from studies involving paired comparisons. The possibility of combining effect sizes from studies involving a mixture of related and unrelated samples is also explored. Resulting estimates are illustrated using data from previous psychiatric research. (SLD)

  13. Design of Phase II Non-inferiority Trials.

    PubMed

    Jung, Sin-Ho

    2017-09-01

    With the development of inexpensive treatment regimens and less invasive surgical procedures, we are increasingly confronted with non-inferiority study objectives. A non-inferiority phase III trial requires a roughly four times larger sample size than a similar standard superiority trial. Because of the large required sample size, we often face feasibility issues in opening a non-inferiority trial. Furthermore, due to the lack of phase II non-inferiority trial design methods, we do not have an opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial, and a large number of non-inferiority clinical questions remain unanswered. In this paper, we develop designs for non-inferiority randomized phase II trials with feasible sample sizes. First, we review a design method for non-inferiority phase III trials. We then propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples, and each of the proposed design methods is shown to require a reasonable sample size for a non-inferiority phase II trial. The three designs address different settings but require similar sample sizes, typical of phase II trials.
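
    For orientation, the familiar normal-approximation sample-size formula for a two-arm non-inferiority comparison of proportions is sketched below; it shows why halving the margin quadruples the required sample size. This is the generic textbook calculation, not the authors' specific phase II designs, and the inputs are illustrative.

        # n per arm for a non-inferiority test of proportions (normal approx.).
        from scipy.stats import norm

        def ni_sample_size_per_arm(p, margin, alpha=0.05, power=0.8):
            # Both arms assumed to have true response rate p; one-sided alpha.
            z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
            return (z_a + z_b) ** 2 * 2 * p * (1 - p) / margin ** 2

        print(round(ni_sample_size_per_arm(p=0.65, margin=0.10)))  # phase III-like
        print(round(ni_sample_size_per_arm(p=0.65, margin=0.20)))  # 4x smaller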

  14. Probing defects in chemically synthesized ZnO nanostrucures by positron annihilation and photoluminescence spectroscopy

    NASA Astrophysics Data System (ADS)

    Chaudhuri, S. K.; Ghosh, Manoranjan; Das, D.; Raychaudhuri, A. K.

    2010-09-01

    The present article describes the size-induced changes in the structural arrangement of the intrinsic defects present in chemically synthesized ZnO nanoparticles of various sizes. Routine X-ray diffraction and transmission electron microscopy have been performed to determine the shapes and sizes of the nanocrystalline ZnO samples. Detailed studies using positron annihilation spectroscopy reveal the presence of zinc vacancies, whereas analysis of the photoluminescence results indicates the signature of charged oxygen vacancies. The size-induced changes in the positron parameters, as well as in the photoluminescence properties, show contrasting, non-monotonic trends as size varies from 4 to 85 nm. Small spherical particles below a critical size (≈23 nm) acquire more positive surface charge due to the higher occupancy of the doubly charged oxygen vacancy, compared with the bigger nanostructures, where the singly charged oxygen vacancy predominates. This electronic alteration triggers yet another interesting phenomenon, described as positron confinement inside the nanoparticles. Finally, based on all the results, a model of the structural arrangement of the intrinsic defects in the present samples has been constructed.

  15. Critical size of crystalline ZrO(2) nanoparticles synthesized in near- and supercritical water and supercritical isopropyl alcohol.

    PubMed

    Becker, Jacob; Hald, Peter; Bremholm, Martin; Pedersen, Jan S; Chevallier, Jacques; Iversen, Steen B; Iversen, Bo B

    2008-05-01

    Nanocrystalline ZrO(2) samples with narrow size distributions and mean particle sizes below 10 nm have been synthesized in a continuous flow reactor in near- and supercritical water as well as supercritical isopropyl alcohol, using a wide range of temperatures, pressures, concentrations and precursors. The samples were comprehensively characterized by powder X-ray diffraction (PXRD), transmission electron microscopy (TEM), and small-angle X-ray scattering (SAXS), and the influence of the synthesis parameters on the particle size, particle size distribution, shape, aggregation and crystallinity was studied. Depending on the choice of synthesis parameters, either monoclinic or tetragonal zirconia phases can be obtained. The results suggest a critical particle size of 5-6 nm for nanocrystalline monoclinic ZrO(2) under the present conditions, which is smaller than estimates reported in the literature. Thus, very small monoclinic ZrO(2) particles can be obtained using a continuous flow reactor. This is an important result with respect to improvement of the catalytic properties of nanocrystalline ZrO(2).

  16. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
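
    Stripped of the calibration term and the Lagrange-multiplier machinery, the sample-size logic reduces to requiring a target relative precision for the mean ED at a given confidence. The simplified rule n = (z * CV / precision)^2 is sketched below; the per-scan CV values are illustrative assumptions, not the paper's measurements.

        # Simplified scan-count rule for a relative-precision requirement.
        import math
        from scipy.stats import norm

        def n_scans(cv, precision=0.05, confidence=0.95):
            z = norm.ppf(1 - (1 - confidence) / 2)
            return math.ceil((z * cv / precision) ** 2)

        # Lower anticipated ED -> relatively noisier readings -> larger CV
        for cv in (0.14, 0.06):
            print(f"per-scan CV = {cv:.2f}: {n_scans(cv)} scans needed")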

  17. A multi-particle crushing apparatus for studying rock fragmentation due to repeated impacts

    NASA Astrophysics Data System (ADS)

    Huang, S.; Mohanty, B.; Xia, K.

    2017-12-01

    Rock crushing is a common process in mining and related operations. Although a number of particle crushing tests have been proposed in the literature, most of them are concerned with single-particle crushing, i.e., a single rock sample is crushed in each test. Considering the realistic scenario in crushers, where many fragments are involved, a laboratory crushing apparatus is developed in this study. This device consists of a Hopkinson pressure bar system and a piston-holder system. The Hopkinson pressure bar system is used to apply calibrated dynamic loads to the piston-holder system, and the piston-holder system is used to hold rock samples and to recover fragments for subsequent particle size analysis. The rock samples are subjected to three to seven impacts at three impact velocities (2.2, 3.8, and 5.0 m/s), with the feed size of the rock particle samples limited to between 9.5 and 12.7 mm. Several key parameters are determined from this test, including particle size distribution parameters, impact velocity, loading pressure, and total work. The results show that the total work correlates well with the resulting fragment size distribution, and the apparatus provides a useful tool for studying the mechanism of crushing, which in turn provides guidelines for the design of commercial crushers.

  18. Tungsten Carbide Grain Size Computation for WC-Co Dissimilar Welds

    NASA Astrophysics Data System (ADS)

    Zhou, Dongran; Cui, Haichao; Xu, Peiquan; Lu, Fenggui

    2016-06-01

    A "two-step" image processing method based on electron backscatter diffraction in scanning electron microscopy was used to compute the tungsten carbide (WC) grain size distribution for tungsten inert gas (TIG) welds and laser welds. Twenty-four images were collected on randomly set fields per sample located at the top, middle, and bottom of a cross-sectional micrograph. Each field contained 500 to 1500 WC grains. The images were recognized through clustering-based image segmentation and WC grain growth recognition. According to the WC grain size computation and experiments, a simple WC-WC interaction model was developed to explain the WC dissolution, grain growth, and aggregation in welded joints. The WC-WC interaction and blunt corners were characterized using scanning and transmission electron microscopy. The WC grain size distribution and the effects of heat input E on grain size distribution for the laser samples were discussed. The results indicate that (1) the grain size distribution follows a Gaussian distribution. Grain sizes at the top of the weld were larger than those near the middle and weld root because of power attenuation. (2) Significant WC grain growth occurred during welding as observed in the as-welded micrographs. The average grain size was 11.47 μm in the TIG samples, which was much larger than that in base metal 1 (BM1 2.13 μm). The grain size distribution curves for the TIG samples revealed a broad particle size distribution without fine grains. The average grain size (1.59 μm) in laser samples was larger than that in base metal 2 (BM2 1.01 μm). (3) WC-WC interaction exhibited complex plane, edge, and blunt corner characteristics during grain growth. A WC ( { 1 {bar{{1}}}00} ) to WC ( {0 1 1 {bar{{0}}}} ) edge disappeared and became a blunt plane WC ( { 10 1 {bar{{0}}}} ) , several grains with two- or three-sided planes and edges disappeared into a multi-edge, and a WC-WC merged.

  19. Evaluation of residual uranium contamination in the dirt floor of an abandoned metal rolling mill.

    PubMed

    Glassford, Eric; Spitz, Henry; Lobaugh, Megan; Spitler, Grant; Succop, Paul; Rice, Carol

    2013-02-01

    A single, large, bulk sample of uranium-contaminated material from the dirt floor of an abandoned metal rolling mill was separated into different types and sizes of aliquots to simulate samples that would be collected during site remediation. The facility rolled approximately 11,000 tons of hot-forged ingots of uranium metal approximately 60 y ago, and it has not been used since that time. Thirty small-mass (≈ 0.7 g) and 15 large-mass (≈ 70 g) samples were prepared from the heterogeneously contaminated bulk material to determine how measurements of the uranium contamination vary with sample size. Aliquots of bulk material were also resuspended in an exposure chamber to produce six samples of respirable particles that were obtained using a cascade impactor. Samples of removable surface contamination were collected by wiping 100 cm² of the interior surfaces of the exposure chamber with 47-mm-diameter fiber filters. Uranium contamination in each of the samples was measured directly using high-resolution gamma ray spectrometry. As expected, results for isotopic uranium (i.e., ²³⁵U and ²³⁸U) measured with the large-mass and small-mass samples are significantly different (p < 0.001), and the coefficient of variation (COV) for the small-mass samples was greater than for the large-mass samples. The uranium isotopic concentrations measured in the air and wipe samples were not significantly different from each other, nor from the results for the large- or small-mass samples (p > 0.05). Large-mass samples are more reliable for characterizing heterogeneously distributed radiological contamination than small-mass samples, since they exhibit the least variation compared to the mean. Thus, samples should be sufficiently large in mass to ensure that the results are truly representative of the heterogeneously distributed uranium contamination present at the facility. Monitoring exposure of workers and the public to uranium contamination resuspended during site remediation should likewise be based on samples of sufficient size and type to accommodate the heterogeneous distribution of uranium in the bulk material.

  20. Measurements of Regolith Simulant Thermal Conductivity Under Asteroid and Mars Surface Conditions

    NASA Astrophysics Data System (ADS)

    Ryan, A. J.; Christensen, P. R.

    2017-12-01

    Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine temperature-dependent thermal conductivity of complex regolith simulants under rough to high vacuum and across a wide range of temperatures. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains that are up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and eventually cementation. We present the experimental data and model results for a suite of samples that were selected to isolate and address regolith physical parameters that affect bulk conductivity. Spherical glass beads of various sizes were used to measure the effect of size frequency distribution. Spherical beads of polypropylene and well-rounded quartz sand have respectively lower and higher solid phase thermal conductivities than the glass beads and thus provide the opportunity to test the sensitivity of bulk conductivity to differences in solid phase conductivity. Gas pressure in our asteroid experimental chambers is held at 10^-6 torr, which is sufficient to negate gas thermal conduction in even our coarsest of samples. On Mars, the atmospheric pressure is such that the mean free path of the gas molecules is comparable to the pore size for many regolith particulates. Thus, subtle variations in pore size and/or atmospheric pressure can produce large changes in bulk regolith conductivity. For each sample measured in our martian environmental chamber, we repeat thermal measurement runs at multiple pressures to observe this behavior. Finally, we present conductivity measurements of angular basaltic simulant that is physically analogous to sand and gravel that may be present on Bennu. This simulant was used for OSIRIS-REx TAGSAM Sample Return Arm engineering tests. We measure the original size frequency distribution as well as several sorted size fractions. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe for the spacecraft and meets grain size requirements for sampling.

  1. Sediment Grain-Size and Loss-on-Ignition Analyses from 2002 Englebright Lake Coring and Sampling Campaigns

    USGS Publications Warehouse

    Snyder, Noah P.; Allen, James R.; Dare, Carlin; Hampton, Margaret A.; Schneider, Gary; Wooley, Ryan J.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.

    2004-01-01

    This report presents sedimentologic data from three 2002 sampling campaigns conducted in Englebright Lake on the Yuba River in northern California. This work was done to assess the properties of the material deposited in the reservoir between completion of Englebright Dam in 1940 and 2002, as part of the Upper Yuba River Studies Program. Included are the results of grain-size-distribution and loss-on-ignition analyses for 561 samples, as well as an error analysis based on replicate pairs of subsamples.

  2. Inertial impaction air sampling device

    DOEpatents

    Dewhurst, K.H.

    1990-05-22

    An inertial impactor is designed for use in an air sampling device that collects respirable-size particles from ambient air. The device may include a graphite furnace as the impaction substrate in a small, portable, direct-analysis structure that gives immediate results and is totally self-contained, allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry. 3 figs.

  3. Rectification of depth measurement using pulsed thermography with logarithmic peak second derivative method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Zeng, Zhi; Shen, Jingling; Zhang, Cunlin; Zhao, Yuejin

    2018-03-01

    The logarithmic peak second derivative (LPSD) method is the most popular method for depth prediction in pulsed thermography, and it is widely accepted that the method is independent of defect size. The theoretical model behind the LPSD method is based on the one-dimensional solution of heat conduction, which does not consider the effect of defect size. When a decay term accounting for the defect aspect ratio is introduced into the solution to correct for the three-dimensional thermal diffusion effect, the analytical model shows that the LPSD method is in fact affected by defect size. Furthermore, we constructed the relation between the characteristic time of the LPSD method and the defect aspect ratio, which was verified with experimental results from stainless steel and glass fiber reinforced plastic (GFRP) samples. We also propose an improved LPSD method for depth prediction that takes the effect of defect size into account; rectification results for the stainless steel and GFRP samples are presented and discussed.
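
    The one-dimensional scaling underlying the LPSD method can be checked numerically. The sketch below builds a synthetic cooling curve from the 1D image-source solution truncated to one reflection term (an assumption made here for brevity) and shows that the time of the peak in the second log-log derivative scales as L^2/alpha, the relation that the proposed aspect-ratio correction then modifies.

        # Verify t* ~ L^2/alpha for the second log-log derivative peak.
        import numpy as np

        alpha = 4.0e-6                  # thermal diffusivity, m^2/s (illustrative)
        t = np.logspace(-3, 2, 4000)    # time, s

        for L in (0.5e-3, 1.0e-3, 2.0e-3):   # defect depths, m
            # 1/sqrt(t) surface cooling plus the first image-source reflection
            T = t ** -0.5 * (1 + 2 * np.exp(-L ** 2 / (alpha * t)))
            d2 = np.gradient(np.gradient(np.log(T), np.log(t)), np.log(t))
            t_star = t[np.argmax(d2)]
            print(f"L = {L * 1e3:.1f} mm: t* = {t_star:.4f} s, "
                  f"alpha*t*/L^2 = {alpha * t_star / L ** 2:.3f}")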

  4. Influence of grain size on the mechanical properties of nano-crystalline copper; insights from molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Rida, A.; Makke, A.; Rouhaud, E.; Micoulaut, M.

    2017-10-01

    We use molecular dynamics simulations to study the mechanical properties of columnar nanocrystalline copper with mean grain sizes between 8.91 nm and 24 nm. The samples were generated using a melting-cooling method and were subjected to uniaxial tensile tests. The results reveal the presence of a critical mean grain size between 16 and 20 nm at which the conventional Hall-Petch tendency inverts, the inversion being illustrated by an increase of flow stress with increasing mean grain size. This transition is caused by a shift of the deformation mechanism from dislocation activity to a combination of grain boundary sliding and dislocations. Moreover, the effect of temperature on the mechanical properties of nanocrystalline copper has been investigated. The results show a decrease of the flow stress and Young's modulus as the temperature increases.

  5. Sampling artifacts in perspective and stereo displays

    NASA Astrophysics Data System (ADS)

    Pfautz, Jonathan D.

    2001-06-01

    The addition of stereo cues to perspective displays is generally expected to improve the perception of depth. However, the display's pixel array samples both perspective and stereo depth cues, introducing inaccuracies and inconsistencies into the representation of an object's depth. The position, size and disparity of an object will be inaccurately presented and size and disparity will be inconsistently presented across depth. These inconsistencies can cause the left and right edges of an object to appear at different stereo depths. This paper describes how these inconsistencies result in conflicts between stereo and perspective depth information. A relative depth judgement task was used to explore these conflicts. Subjects viewed two objects and reported which appeared closer. Three conflicts resulting from inconsistencies caused by sampling were examined: (1) Perspective size and location versus stereo disparity. (2) Perspective size versus perspective location and stereo disparity. (3) Left and right edge disparity versus perspective size and location. In the first two cases, subjects achieved near-perfect accuracy when perspective and disparity cues were complementary. When size and disparity were inconsistent and thus in conflict, stereo dominated perspective. Inconsistency between the disparities of the horizontal edges of an object confused the subjects, even when complementary perspective and stereo information was provided. Since stereo was the dominant cue and was ambiguous across the object, this led to significantly reduced accuracy. Edge inconsistencies also led to more complaints about visual fatigue and discomfort.

  6. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    USGS Publications Warehouse

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  7. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  8. Optimization of Scat Detection Methods for a Social Ungulate, the Wild Pig, and Experimental Evaluation of Factors Affecting Detection of Scat.

    PubMed

    Keiter, David A; Cunningham, Fred L; Rhodes, Olin E; Irwin, Brian J; Beasley, James C

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  9. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE PAGES

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.; ...

    2016-05-25

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  10. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  11. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  12. Sampling methods, dispersion patterns, and fixed precision sequential sampling plans for western flower thrips (Thysanoptera: Thripidae) and cotton fleahoppers (Hemiptera: Miridae) in cotton.

    PubMed

    Parajulee, M N; Shrestha, R B; Leser, J F

    2006-04-01

    A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave results similar to the visual method in detecting adult thrips, but it detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed-precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's power law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (precision of 0.25), 15 plants were estimated to be the minimum sample size when the population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with increasing fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
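
    The fixed-precision calculations rest on Taylor's power law, s^2 = a * m^b, which gives a minimum sample size of n = a * m^(b-2) / D^2 at precision D (SE/mean). The sketch below uses coefficients back-calculated to roughly reproduce the visual-sampling numbers for thrips quoted above; they are illustrative, not the paper's fitted values.

        # Minimum sample size from Taylor's power law at fixed precision D.
        def min_sample_size(m, a=0.94, b=1.78, D=0.25):
            # n = s^2 / (D*m)^2 with s^2 = a * m**b
            return a * m ** (b - 2) / D ** 2

        for m in (1, 2, 5, 10):
            print(f"mean density {m:>2}/plant -> n ~ {min_sample_size(m):.0f} plants")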

  13. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), originating from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, we could not rely on the asymptotic distributions of the tests in hypothesis testing. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests, with different approaches to calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.

  14. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    PubMed

    Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H

    2017-01-01

    In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
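
    The mechanics of the problem are easy to see in code: a regression on vectorized distance matrices has N = n(n-1)/2 rows, so the likelihood and the penalty terms of AIC, AICc, and BIC are all computed from an inflated N even though only n individuals were sampled. The sketch below makes that explicit on synthetic data; it is an illustration of the issue, not a reimplementation of the authors' simulations.

        # AIC/AICc/BIC for an OLS "MRM" fit on vectorized distance matrices.
        import numpy as np
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(6)
        n = 40                                    # individuals sampled
        coords = rng.random((n, 2))               # "geographic" positions
        geo = pdist(coords)                       # n*(n-1)/2 pairwise distances
        gen = 0.5 * geo + rng.normal(0, 0.1, geo.size)   # "genetic" distances

        X = np.column_stack([np.ones_like(geo), geo])
        beta, rss, *_ = np.linalg.lstsq(X, gen, rcond=None)
        N, k = geo.size, X.shape[1] + 1           # params incl. error variance
        ll = -0.5 * N * (np.log(2 * np.pi * rss[0] / N) + 1)
        aic = 2 * k - 2 * ll
        aicc = aic + 2 * k * (k + 1) / (N - k - 1)
        bic = k * np.log(N) - 2 * ll
        print(f"N (pairs) = {N} from only n = {n} individuals")
        print(f"AIC = {aic:.1f}, AICc = {aicc:.1f}, BIC = {bic:.1f}")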

  15. Measuring the specific surface area of natural and manmade glasses: effects of formation process, morphology, and particle size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papelis, Charalambos; Um, Wooyong; Russel, Charles E.

    2003-03-28

    The specific surface area of natural and manmade solid materials is a key parameter controlling important interfacial processes in natural environments and engineered systems, including dissolution reactions and sorption processes at solid-fluid interfaces. To improve our ability to quantify the release of trace elements trapped in natural glasses, the release of hazardous compounds trapped in manmade glasses, or the release of radionuclides from nuclear melt glass, we measured the specific surface area of natural and manmade glasses as a function of particle size, morphology, and composition. Volcanic ash, volcanic tuff, tektites, obsidian glass, and in situ vitrified rock were analyzed. Specific surface area estimates were obtained using krypton as the gas adsorbent and the BET model. The range of surface areas measured exceeded three orders of magnitude. A tektite sample had the highest surface area (1.65 m2/g), while one of the samples of in situ vitrified rock had the lowest surface area (0.0016 m2/g). The specific surface area of the samples was a function of particle size, decreasing with increasing particle size. Different types of materials, however, showed variable dependence on particle size, and could be assigned to one of three distinct groups: (1) Samples with low surface area dependence on particle size and surface areas approximately two orders of magnitude higher than the surface area of smooth spheres of equivalent size; the specific surface area of these materials was attributed mostly to internal porosity and surface roughness. (2) Samples that showed a trend of decreasing surface area dependence on particle size as the particle size increased; the minimum specific surface area of these materials was between 0.1 and 0.01 m2/g and was also attributed to internal porosity and surface roughness. (3) Samples whose surface area showed a monotonic decrease with increasing particle size, never reaching an ultimate surface-area limit within the particle size range examined. The surface area results were consistent with particle morphology, examined by scanning electron microscopy, and have significant implications for the release of radionuclides and toxic metals in the environment.

  16. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
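
    For intuition, the sketch below computes one common notion of effective sample size under a Gaussian (e.g., Brownian motion) model: in generalized least squares the variance of the estimated mean is sigma^2/(1'R^-1 1), so n_e = 1'R^-1 1 for a correlation matrix R. This is a minimal sketch of that general idea, not of the paper's regression effective sample size, and the 3-species covariance matrix is a made-up example.

```python
import numpy as np

def effective_sample_size(V: np.ndarray) -> float:
    """GLS-based effective sample size from a trait covariance matrix V."""
    R = V / np.sqrt(np.outer(np.diag(V), np.diag(V)))  # correlation matrix
    ones = np.ones(R.shape[0])
    return float(ones @ np.linalg.solve(R, ones))

# Brownian-motion covariances are shared branch lengths: two sister
# species sharing most of their history contribute little beyond one sample.
V = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
print(effective_sample_size(V))   # well below the raw n = 3
```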

  17. Stability and bias of classification rates in biological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.; Hines, J.E.

    1990-01-01

    We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
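
    The sketch below reproduces the qualitative phenomenon in miniature: at small group sizes, the resubstitution (apparent) classification rate of a linear discriminant analysis is optimistically biased relative to a cross-validated rate. Group means, dispersion, dimensionality, and sample sizes are illustrative choices, not the factorial design of the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
for n_per_group in (10, 50):
    # two overlapping 4-variable Gaussian groups
    X = np.vstack([rng.normal(0.0, 1.0, size=(n_per_group, 4)),
                   rng.normal(0.7, 1.0, size=(n_per_group, 4))])
    y = np.repeat([0, 1], n_per_group)
    apparent = LinearDiscriminantAnalysis().fit(X, y).score(X, y)
    cv = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"n/group={n_per_group}: apparent={apparent:.2f}, cross-validated={cv:.2f}")
```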

  18. A comparative study of the physical properties of Cu-Zn ferrites annealed under different atmospheres and temperatures: Magnetic enhancement of Cu0.5Zn0.5Fe2O4 nanoparticles by a reducing atmosphere

    NASA Astrophysics Data System (ADS)

    Gholizadeh, Ahmad

    2018-04-01

    In the present work, the influence of different sintering atmospheres and temperatures on the physical properties of Cu0.5Zn0.5Fe2O4 nanoparticles, including the redistribution of Zn2+ and Fe3+ ions, the oxidation of Fe atoms in the lattice, crystallite sizes, IR bands, saturation magnetization and magnetic core sizes, has been investigated. The fitting of XRD patterns using the FullProf program and also FT-IR measurements show the formation of a cubic structure with no impurity phase in any of the samples. The unit cell parameter of the samples sintered in air and inert atmospheres tends to decrease with sintering temperature, but increases for the samples sintered under a carbon monoxide atmosphere. The magnetization curves versus applied magnetic field indicate different behaviour for the samples sintered at 700 °C with respect to those sintered at 300 °C. Also, the saturation magnetization increases with the sintering temperature and reaches a maximum of 61.68 emu/g in the sample sintered under a reducing atmosphere at 600 °C. The magnetic particle size distributions of the samples have been calculated by fitting the M-H curves with a size-distributed Langevin function. The results obtained from the XRD and FTIR measurements suggest that the magnetic core size has the dominant effect on the variation of the saturation magnetization of the samples.
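
    A minimal sketch of the fitting step described above, assuming a single-moment Langevin function M(B) = Ms[coth(x) - 1/x] with x = muB/(kBT); the full analysis integrates over a particle-size distribution, and all numerical values below are synthetic placeholders rather than data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

kB, T = 1.380649e-23, 300.0              # Boltzmann constant (J/K), temperature (K)

def langevin(B, Ms, mu):
    """Superparamagnetic M(B): Ms * [coth(x) - 1/x], x = mu*B/(kB*T)."""
    x = mu * B / (kB * T)
    return Ms * (1.0 / np.tanh(x) - 1.0 / x)

B = np.linspace(0.01, 2.0, 80)           # applied field in tesla, avoiding B = 0
rng = np.random.default_rng(2)
M = langevin(B, 60.0, 9e-20) + rng.normal(0.0, 0.3, B.size)  # synthetic M-H data

(Ms_fit, mu_fit), _ = curve_fit(langevin, B, M, p0=[50.0, 5e-20])
print(Ms_fit, mu_fit)  # mu converts to a core diameter via mu = Ms_bulk*(pi/6)*d**3
```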

  19. Sampling design and required sample size for evaluating contamination levels of 137Cs in Japanese fir needles in a mixed deciduous forest stand in Fukushima, Japan.

    PubMed

    Oba, Yurika; Yamada, Toshihiro

    2017-05-01

    We estimated the sample size (the number of samples) required to evaluate the concentration of radiocesium (137Cs) in Japanese fir (Abies firma Sieb. & Zucc.), 5 years after the outbreak of the Fukushima Daiichi Nuclear Power Plant accident. We investigated the spatial structure of the contamination levels in this species growing in a mixed deciduous broadleaf and evergreen coniferous forest stand. We sampled 40 saplings with a tree height of 150-250 cm in a Fukushima forest community. The results showed that: (1) there was no correlation between the 137Cs concentration in needles and soil, and (2) the difference in the spatial distribution pattern of 137Cs concentration between needles and soil suggests that the contribution of root uptake to 137Cs in new needles of this species may be minor in the 5 years after the radionuclides were released into the atmosphere. The concentration of 137Cs in needles showed a strong positive spatial autocorrelation in the distance class from 0 to 2.5 m, suggesting that statistical analysis of such data should consider spatial autocorrelation when assessing the radioactive contamination of forest trees. According to our sample size analysis, a sample size of seven trees was required to determine the mean contamination level within an error of no more than 10% of the mean. This required sample size may be feasible for most sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
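
    The standard calculation behind such a sample size analysis (for independent samples; the spatial autocorrelation reported above would require an adjustment) is sketched below. The coefficient of variation is a placeholder, not the study's estimate.

```python
import math
from scipy import stats

def required_n(cv: float, rel_error: float, alpha: float = 0.05) -> int:
    """Samples needed so the CI half-width is rel_error of the mean."""
    n = 30.0
    for _ in range(100):                       # fixed-point iteration, since t depends on n
        t = stats.t.ppf(1 - alpha / 2, df=max(n - 1, 1))
        n = (t * cv / rel_error) ** 2
    return math.ceil(n)

print(required_n(cv=0.12, rel_error=0.10))     # a handful of trees at low CV
```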

  20. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. Endothelial samples (examinations) need to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.

  1. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
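
    The core trick is easy to show in a few lines. The sketch below uses Python/NumPy rather than the paper's R, and bootstraps a sample mean instead of Pearson's correlation, but the multinomial-weights idea is the same: draw counts, normalize them to weights, and obtain every replication with one matrix product.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=50)                  # observed data, n = 50
B, n = 10_000, x.size

# B x n multinomial count matrix, each row summing to n, rescaled to weights
W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n

boot_means = W @ x                       # all B bootstrap replications at once
print(boot_means.std(ddof=1))            # bootstrap SE of the sample mean
```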

  2. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
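
    A minimal sketch of the adjustment in the simplest case, assuming every infant is a twin (clusters of size 2), where the usual design effect 1 + (m - 1)·ICC reduces to 1 + ICC; trials mixing singletons and twins need the fuller calculation the authors' tool implements. All numbers below are placeholders.

```python
import math
from scipy import stats

def n_independent(p0: float, p1: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Per-arm n for comparing two proportions, normal approximation."""
    za = stats.norm.ppf(1 - alpha / 2)
    zb = stats.norm.ppf(power)
    pbar = (p0 + p1) / 2
    return (za + zb) ** 2 * 2 * pbar * (1 - pbar) / (p0 - p1) ** 2

icc = 0.6                                  # placeholder twin-twin ICC
n = n_independent(0.30, 0.20)              # placeholder outcome rates
design_effect = 1 + icc                    # clusters of size 2: 1 + (2 - 1) * ICC
print(math.ceil(n), math.ceil(n * design_effect))   # unadjusted vs twin-adjusted
```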

  3. Water quality monitoring: A comparative case study of municipal and Curtin Sarawak's lake samples

    NASA Astrophysics Data System (ADS)

    Anand Kumar, A.; Jaison, J.; Prabakaran, K.; Nagarajan, R.; Chan, Y. S.

    2016-03-01

    In this study, the particle size distribution and zeta potential of the suspended particles in municipal water and in surface water of Curtin Sarawak's lake were compared, and the samples were analysed using the dynamic light scattering method. A high concentration of suspended particles affects water quality as well as suppressing aquatic photosynthetic systems. A new approach has been carried out in the current work to determine the particle size distribution and zeta potential of the suspended particles present in the water samples. The results for the lake samples showed that the particle size ranges from 180 nm to 1345 nm and the zeta potential values range from -8.58 mV to -26.1 mV. Higher zeta potential values were observed in the surface water samples of Curtin Sarawak's lake compared to the municipal water. The zeta potential values indicate that the suspended particles are stable and the chance of agglomeration is lower in the lake water samples. Moreover, the effects of physico-chemical parameters on the zeta potential of the water samples were also discussed.

  4. Electrical conductivity and magnetic field dependent current-voltage characteristics of nanocrystalline nickel ferrite

    NASA Astrophysics Data System (ADS)

    Ghosh, P.; Bhowmik, R. N.; Das, M. R.; Mitra, P.

    2017-04-01

    We have studied the grain size dependent electrical conductivity, dielectric relaxation and magnetic field dependent current-voltage (I-V) characteristics of nickel ferrite (NiFe2O4). The material was synthesized by a sol-gel self-combustion technique, followed by ball milling at room temperature in air to control the grain size. The material was characterized using X-ray diffraction (refined with MAUD software) and transmission electron microscopy. Impedance spectroscopy and I-V characteristics in the presence of variable magnetic fields confirmed the increase of resistivity for the fine powdered samples (grain size 5.17±0.6 nm) resulting from ball milling of the chemically routed sample. The activation energy of the material for the electrical charge hopping process increased with decreasing grain size upon mechanical milling of the chemically routed sample. The I-V curves showed many highly non-linear and irreversible electrical features, e.g., I-V loops and bi-stable electronic states (low resistance state, LRS, and high resistance state, HRS) on cycling the electrical bias voltage direction during I-V curve measurement. The dc resistance in the HRS at 20 V in the presence of a 10 kOe magnetic field increased from ∼3.4876×10^4 Ω for the chemically routed (unmilled) sample to ∼3.4152×10^5 Ω for the 10 h milled sample. The samples exhibited an unusual negative differential resistance (NDR) effect that gradually decreased on decreasing the grain size of the material. The magneto-resistance of the samples at room temperature was found to be substantially large (∼25-65%). The control of electrical charge transport properties under a magnetic field, as observed in the present ferrimagnetic material, indicates magneto-electric coupling in the material, and the results could be useful in spintronics applications.

  5. Analyses of sweep-up, ejecta, and fallback material from the 4250 metric ton high explosive test ''MISTY PICTURE'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohletz, K.H.; Raymond, R. Jr.; Rawson, G.

    1988-01-01

    The MISTY PICTURE surface burst was detonated at the White Sands Missile Range in May of 1987. The Los Alamos National Laboratory dust characterization program was expanded to help correlate and interrelate aspects of the overall MISTY PICTURE dust and ejecta characterization program. Pre-shot sampling of the test bed included composite samples from 15 to 75 m distance from Surface Ground Zero (SGZ) representing depths down to 2.5 m, interval samples from 15 to 25 m from SGZ representing depths down to 3 m, and samples of surface material (top 0.5 cm) out to distances of 190 m from SGZ. Sweep-up samples were collected in GREG/SNOB gages located within the DPR. All samples were dry-sieved between 8.0 mm and 0.045 mm (16 size fractions); selected samples were analyzed for fines by a centrifugal settling technique. The size distributions were analyzed using spectral decomposition based upon a sequential fragmentation model. Results suggest that the same particle size subpopulations are present in the ejecta, fallout, and sweep-up samples as are present in the pre-shot test bed. The particle size distribution in post-shot environments apparently can be modelled taking into account heterogeneities in the pre-shot test bed and the dominant wind direction during and following the shot. 13 refs., 12 figs., 2 tabs.

  6. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    PubMed

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Malaria prevalence metrics in low- and middle-income countries: an assessment of precision in nationally-representative surveys.

    PubMed

    Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M

    2017-11-21

    One pillar to monitoring progress towards the Sustainable Development Goals is the investment in high quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.

  8. Transport Loss Estimation of Fine Particulate Matter in Sampling Tube Based on Numerical Computation

    NASA Astrophysics Data System (ADS)

    Luo, L.; Cheng, Z.

    2016-12-01

    In-situ measurement of PM2.5 physical and chemical properties is an important approach for investigating the mechanisms of PM2.5 pollution. Minimizing PM2.5 transport loss in the sampling tube is essential for ensuring the accuracy of the measurement result. In order to estimate the integrated PM2.5 transport efficiency in sampling tubes and optimize tube designs, the effects of different tube factors (length, bore size and bend number) on PM2.5 transport were analyzed based on numerical computation. The results show that the PM2.5 mass concentration transport efficiency of a vertical tube with a flowrate of 20.0 L·min-1, bore size of 4 mm and length of 1.0 m was 89.6%. However, the transport efficiency increases to 98.3% when the bore size is increased to 14 mm. The PM2.5 mass concentration transport efficiency of a horizontal tube with a flowrate of 1.0 L·min-1, bore size of 4 mm and length of 10.0 m is 86.7%, increasing to 99.2% at a length of 0.5 m. A low transport efficiency of 85.2% for PM2.5 mass concentration is estimated in a bend with a flowrate of 20.0 L·min-1, bore size of 4 mm and curvature angle of 90°. Keeping the air flow in the tube laminar, by holding the ratio of flowrate (L·min-1) to bore size (mm) below 1.4, is beneficial for decreasing PM2.5 transport loss. For the target of PM2.5 transport efficiency higher than 97%, it is advised to use vertical sampling tubes with lengths less than 6.0 m for flowrates of 2.5, 5.0 and 10.0 L·min-1, and bore sizes larger than 12 mm for flowrates of 16.7 or 20.0 L·min-1. For horizontal sampling tubes, tube length is decided by the ratio of flowrate and bore size. Meanwhile, it is suggested to reduce the number of bends in tubes with turbulent flow.
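
    The laminar-flow rule of thumb quoted above can be checked with the pipe Reynolds number, Re = 4Q/(pi·d·nu). A minimal sketch, assuming air's kinematic viscosity of about 1.5e-5 m^2/s: a flowrate(L·min-1)-to-bore(mm) ratio of 1.4 lands near Re ≈ 2000, the usual laminar limit.

```python
import math

NU_AIR = 1.5e-5                       # kinematic viscosity of air, m^2/s

def reynolds(flow_l_min: float, bore_mm: float) -> float:
    """Pipe Reynolds number Re = 4Q / (pi * d * nu)."""
    q = flow_l_min / 1000.0 / 60.0    # L/min -> m^3/s
    d = bore_mm / 1000.0              # mm -> m
    return 4.0 * q / (math.pi * d * NU_AIR)

for flow, bore in [(20.0, 4.0), (20.0, 14.0), (1.0, 4.0)]:
    print(f"{flow} L/min, {bore} mm bore: Re = {reynolds(flow, bore):.0f}")
```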

  9. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three pots of very different sizes serves to demonstrate that it is the absolute sample size that matters most in determining the accuracy of a poll's findings, not the relative sample size, i.e., the size of the sample in relation to its population.
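
    The statistics behind the demonstration: for simple random sampling, the 95% margin of error of a proportion is roughly 1/sqrt(n) and is nearly independent of population size until the sample becomes a large fraction of the population. A minimal sketch with a hypothetical n = 1000:

```python
import math

def margin_of_error(n: int, N: int, p: float = 0.5) -> float:
    """95% margin of error for a proportion, with finite population correction."""
    fpc = math.sqrt((N - n) / (N - 1))
    return 1.96 * math.sqrt(p * (1 - p) / n) * fpc

for N in (10_000, 1_000_000, 100_000_000):      # "pot" sizes
    print(f"N={N:>11,}: MOE = {margin_of_error(1000, N):.4f}")
# ~3% in every case: absolute n, not n/N, drives precision
```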

  10. Angiographic core laboratory reproducibility analyses: implications for planning clinical trials using coronary angiography and left ventriculography end-points.

    PubMed

    Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B

    2008-06-01

    To assess the reproducibility of core laboratory performance and its impact on sample size calculations. Little information exists about the overall reproducibility of core laboratories, in contradistinction to the performance of individual technicians. Also, qualitative parameters are increasingly being adjudicated as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion require substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many hundreds of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these. Core laboratories can provide reproducibility performance comparable to that commonly ascribed to individual technicians. The differences in reproducibility yield huge differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess, and conclusions based on these parameters should arise only from very large trials.

  11. Temporal change in the size distribution of airborne Radiocesium derived from the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Kaneyasu, Naoki; Ohashi, Hideo; Suzuki, Fumie; Okuda, Tomoaki; Ikemori, Fumikazu; Akata, Naofumi

    2013-04-01

    The accident at the Fukushima Dai-ichi nuclear power plant discharged a large amount of radioactive material into the environment. Forty days after the accident, we started to collect size-segregated aerosol samples at Tsukuba City, Japan, located 170 km south of the plant, using a low-pressure cascade impactor. The sampling continued from April 28 through October 26, 2011. Eight sample sets were collected in total. The radioactivity of 134Cs and 137Cs in the aerosols collected at each stage was determined by gamma-ray spectrometry with a high-sensitivity germanium detector. After the gamma-ray spectrometry analysis, the chemical species in the aerosols were analyzed. The analyses of the first (April 28-May 12) and second (May 12-26) samples showed that the activity size distributions of 134Cs and 137Cs in aerosols reside mostly in the accumulation mode size range. These activity size distributions almost overlapped with the mass size distribution of non-sea-salt sulfate aerosol. From these results, we concluded that sulfate was the main transport medium of these radionuclides, and that re-suspended soil particles with attached radionuclides were not the major airborne radioactive substances by the end of May 2011 (Kaneyasu et al., 2012). We further conducted a successive extraction experiment on the radiocesium from the aerosol deposits on the aluminum sheet substrate (8th stage of the first aerosol sample, 0.5-0.7 μm in aerodynamic diameter) with water and 0.1 M HCl. In contrast to the relatively insoluble Chernobyl radionuclides, the fine-mode radiocesium in aerosols collected at Tsukuba was completely water-soluble (100%). From the third aerosol sample onward, the activity size distributions started to change, i.e., the major peak in the accumulation mode size range seen in the first and second aerosol samples became smaller and an additional peak appeared in the coarse mode size range. The comparison of the activity size distributions of radiocesium and the mass size distributions of major aerosol components collected by the end of August 2011 (i.e., sample No. 5) and its implications will be discussed in the presentation. Reference: Kaneyasu et al., Environ. Sci. Technol. 46, 5720-5726 (2012).

  12. Measuring solids concentration in stormwater runoff: comparison of analytical methods.

    PubMed

    Clark, Shirley E; Siu, Christina Y S

    2008-01-15

    Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.

  13. Green Microwave-Assisted Combustion Synthesis of Zinc Oxide Nanoparticles with Citrullus colocynthis (L.) Schrad: Characterization and Biomedical Applications.

    PubMed

    Azizi, Susan; Mohamad, Rosfarizan; Mahdavi Shahri, Mahnaz

    2017-02-16

    In this paper, a green microwave-assisted combustion approach to synthesize ZnO-NPs using zinc nitrate and Citrullus colocynthis (L.) Schrad (fruit, seed and pulp) extracts as bio-fuels is reported. The structural, optical, and colloidal properties of the synthesized ZnO-NP samples were studied. Results illustrate that the morphology and particle size of the ZnO samples differ and depend on the bio-fuel. The XRD results revealed that hexagonal wurtzite ZnO-NPs with mean particle sizes of 27-85 nm were produced by the different bio-fuels. The optical band gap increased from 3.25 to 3.40 eV with decreasing particle size. FTIR results showed some differences in the surface structures of the as-synthesized ZnO-NP samples. This led to differences in the zeta potential, hydrodynamic size, and, more significantly, antioxidant activity through scavenging of 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radicals. In in vitro cytotoxicity studies on 3T3 cells, dose-dependent toxicity, with no toxic effect at concentrations below 0.26 mg/mL, was shown for the ZnO-NP samples. Furthermore, the as-synthesized ZnO-NPs inhibited the growth of medically significant pathogenic gram-positive (Bacillus subtilis and methicillin-resistant Staphylococcus aureus) and gram-negative (Pseudomonas aeruginosa and Escherichia coli) bacteria. This study provides a simple, green and efficient approach to produce ZnO nanoparticles for various applications.

  14. Measurement of particle size distribution of soil and selected aggregate sizes using the hydrometer method and laser diffractometry

    NASA Astrophysics Data System (ADS)

    Guzmán, G.; Gómez, J. A.; Giráldez, J. V.

    2010-05-01

    Soil particle size distribution has traditionally been determined by the hydrometer or the sieve-pipette methods, both of them time consuming and requiring a relatively large soil sample. This might be a limitation in situations, such as the analysis of suspended sediment, when the sample is small. A possible alternative to these methods is optical techniques such as laser diffractometry. However, the literature indicates that the use of this technique as an alternative to traditional methods is still limited, because of the difficulty in replicating the results obtained with the standard methods. In this study we present the percentages of soil grain sizes determined using laser diffractometry within ranges set between 0.04 and 2000 μm, with a Beckman-Coulter® LS-230 with a 750 nm laser beam and software version 3.2, in five soils representative of southern Spain: Alameda, Benacazón, Conchuela, Lanjarón and Pedrera. In three of the studied soils (Alameda, Benacazón and Conchuela) the particle size distribution of each aggregate size class was also determined. Aggregate size classes were obtained by dry sieve analysis using a Retsch AS 200 basic®. Two hundred grams of air-dried soil were sieved for 150 s at an amplitude of 2 mm, yielding nine different sizes between 2000 μm and 10 μm. Analyses were performed in triplicate. The soil sample preparation was also adapted to our conditions. A small amount of each soil sample (less than 1 g) was transferred to the fluid module full of running water and disaggregated by ultrasonication at energy level 4 with 80 ml of sodium hexametaphosphate solution for 580 seconds. Two replicates of each sample were performed. Each measurement was made with a 90 second reading at a pump speed of 62. After the laser diffractometry analysis, each soil and its aggregate classes were processed by calibrating its own optical model, fitting the optical parameters that mainly depend on the color and shape of the analyzed particles. As a second alternative, a single optical model valid for a broad range of soils, developed by the Department of Soil, Water, and Environmental Science of the University of Arizona (personal communication, already submitted), was tested. The results were compared with the particle size distribution measured in the same soils and aggregate classes using the hydrometer method. Preliminary results indicate a better calibration of the technique using the optical model of the Department of Soil, Water, and Environmental Science of the University of Arizona, which yielded good correlations (r2 > 0.85). This result suggests that, with an appropriate calibration of the optical model, laser diffractometry might provide a reliable soil particle characterization.

  15. Terrestrial in situ sampling of dust devils (relative particle loads and vertical grain size distributions) as an equivalent for martian dust devils.

    NASA Astrophysics Data System (ADS)

    Raack, J.; Dennis, R.; Balme, M. R.; Taj-Eddine, K.; Ori, G. G.

    2017-12-01

    Dust devils are small vertical convective vortices which occur on Earth and Mars [1], but their internal structure is almost unknown. Here we report on in situ samples of two active dust devils in the Sahara Desert in southern Morocco [2]. For the sampling we used a 4 m high aluminium pipe with sampling areas made of removable adhesive tape. We took samples between 0.1 and 4 m with a sampling interval of 0.5 m and between 0.5 and 2 m with an interval of 0.25 m, respectively. The maximum diameter of all particles at the different sampling heights was then measured using an optical microscope to obtain vertical grain size distributions and relative particle loads. Our measurements imply that both dust devils have a generally comparable internal structure despite their different strengths and dimensions, which indicates that a dust devil probably reflects the surficial grain size distribution it moves over. The particle sizes within the dust devils decrease nearly exponentially with height, which is comparable to results by [3]. Furthermore, our results show that about 80-90% of the total particle load was lifted only within the first meter, which is direct evidence for the existence of a sand skirt. If we assume that grains with a diameter <31 μm can go into suspension [4], our results show that less than 0.1 wt% can be entrained into the atmosphere. Although this amount seems very low, these values represent between 60 and 70% of all lifted particles, owing to the small grain sizes and their low weight. On Mars, the amount of lifted particles will generally be higher as the dust coverage is larger [5], although the atmosphere can only suspend smaller grain sizes (<20 μm) [6] compared to Earth. During our field campaign we observed numerous larger dust devils each day, which were up to several hundred meters tall and had diameters of several tens of meters. This implies a much higher input of fine-grained material into the atmosphere (which will have an influence on the climate, weather, and human health [7]) compared to the relatively small dust devils sampled during our field campaign. [1] Thomas and Gierasch (1985) Science 230. [2] Raack et al. (2017) Astrobiology. [3] Oke et al. (2007) J. Arid Environ. 71. [4] Balme and Greeley (2006) Rev. Geophys. 44. [5] Christensen (1986) JGR 91. [6] Newman et al. (2002) JGR 107. [7] Gillette and Sinclair (1990) Atmos. Environ. 24.

  16. High impact of in situ dextran coating on biocompatibility, stability and magnetic properties of iron oxide nanoparticles.

    PubMed

    Shaterabadi, Zhila; Nabiyouni, Gholamreza; Soleymani, Meysam

    2017-06-01

    Biocompatible ferrofluids based on dextran coated iron oxide nanoparticles were fabricated by a conventional co-precipitation method. The experimental results show that the presence of dextran in the reaction medium not only gives rise to superparamagnetic behavior but also results in a significant suppression of the saturation magnetization of the dextran coated samples. These results can be attributed to size reduction originating from the role of dextran as a surfactant. Moreover, the weight ratio of dextran to magnetic nanoparticles has a remarkable influence on the size and magnetic properties of the nanoparticles, so that the sample prepared with a higher weight ratio of dextran to nanoparticles has a smaller size and saturation magnetization compared with the other samples. In addition, the ferrofluids containing such nanoparticles have excellent stability at physiological pH for several months. Furthermore, the biocompatibility studies reveal that surface modification of the nanoparticles by dextran dramatically decreases the cytotoxicity of the bare nanoparticles and consequently improves their potential for diagnostic and therapeutic applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Extracting samples of high diversity from thematic collections of large gene banks using a genetic-distance based approach

    PubMed Central

    2010-01-01

    Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should also be developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample: maintenance of at least 80% of the allelic richness of the thematic collection, with approximately 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change in phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds" for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
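
    In the spirit of the approach described above, the sketch below selects a diverse subset by a greedy maximin rule on a pairwise genetic distance matrix: repeatedly add the accession farthest from everything already selected. The real method also pre-selects accessions by the trait of interest, and the random distance matrix here is a placeholder.

```python
import numpy as np

def maximin_core(D: np.ndarray, k: int) -> list[int]:
    """Pick k accession indices from a pairwise distance matrix D."""
    core = [int(np.argmax(D.sum(axis=1)))]      # seed: most distant overall
    while len(core) < k:
        min_dist = D[:, core].min(axis=1)       # distance to nearest selected
        min_dist[core] = -1.0                   # exclude already selected
        core.append(int(np.argmax(min_dist)))
    return core

rng = np.random.default_rng(7)
pts = rng.random((100, 10))                     # 100 accessions, 10 marker axes
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
print(maximin_core(D, 15))                      # ~15% of the collection
```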

  18. Cost-efficient designs for three-arm trials with treatment delivered by health professionals: Sample sizes for a combination of nested and crossed designs

    PubMed Central

    Moerbeek, Mirjam

    2018-01-01

    Background This article studies the design of trials that compare three treatment conditions that are delivered by two types of health professionals. One type of health professional delivers one treatment, and the other type delivers two treatments; hence, this design is a combination of a nested and a crossed design. As each health professional treats multiple patients, the data have a nested structure. This nested structure has thus far been ignored in the design of such trials, which may result in an underestimate of the required sample size. In the design stage, the sample sizes should be determined such that a desired power is achieved for each of the three pairwise comparisons, while keeping costs or sample size at a minimum. Methods The statistical model that relates outcome to treatment condition and explicitly takes the nested data structure into account is presented. Mathematical expressions that relate sample size to power are derived for each of the three pairwise comparisons on the basis of this model. The cost-efficient design achieves sufficient power for each pairwise comparison at the lowest costs. Alternatively, one may minimize the total number of patients. The sample sizes are found numerically, and an Internet application is available for this purpose. The design is also compared to a nested design in which each health professional delivers just one treatment. Results Mathematical expressions show that this design is more efficient than the nested design. For each pairwise comparison, power increases with the number of health professionals and the number of patients per health professional. The methodology of finding a cost-efficient design is illustrated using a trial that compares treatments for social phobia. The optimal sample sizes reflect the costs for training and supervising psychologists and psychiatrists, and the patient-level costs in the three treatment conditions. Conclusion This article provides the methodology for designing trials that compare three treatment conditions while taking the nesting of patients within health professionals into account. As such, it helps to avoid underpowered trials. To use the methodology, a priori estimates of the total outcome variances and intraclass correlation coefficients must be obtained from experts’ opinions or findings in the literature. PMID:29316807

  19. Effect size and statistical power in the rodent fear conditioning literature - A systematic review.

    PubMed

    Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B

    2018-01-01

    Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
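
    For reference, the kind of calculation the review performed can be sketched with a normal-approximation formula for a two-sample comparison; the standardized effect size below is a placeholder, not the review's estimate.

```python
import math
from scipy import stats

def n_per_group(effect_over_sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group n for a two-sample comparison, normal approximation."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_over_sd) ** 2)

print(n_per_group(1.0))   # 16 per group for a 1-SD standardized effect
```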

  1. 45 CFR 1356.84 - Sampling.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...

  2. 45 CFR 1356.84 - Sampling.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...

  3. 45 CFR 1356.84 - Sampling.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...

  4. Size-selective separation of submicron particles in suspensions with ultrasonic atomization.

    PubMed

    Nii, Susumu; Oka, Naoyoshi

    2014-11-01

    Aqueous suspensions containing silica or polystyrene latex were ultrasonically atomized to separate particles of a specific size. With the help of a fog of fine liquid droplets with a narrow size distribution, submicron particles in a limited size range were successfully separated from the suspensions. Performance of the separation was characterized by analyzing the size and concentration of the collected particles with a high resolution method. Irradiation of the sample suspensions with 2.4 MHz ultrasound allowed the separation of particles of specific sizes from 90 to 320 nm regardless of the type of material. Addition of a small amount of the nonionic surfactant PONPE20 to SiO2 suspensions enhanced the collection of finer particles and achieved a remarkable increase in the number of collected particles. Degassing of the sample suspension eliminated the separation performance. Dissolved air in the suspensions plays an important role in this separation. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller rates of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample sizes become usable for an accurate association analysis.
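
    A minimal sketch of an SPRT, assuming a generic Bernoulli test of H0: p = p0 versus H1: p = p1 with Wald's boundary approximations; the SNP-specific likelihoods of the paper are not reproduced here, and the third "keep sampling" outcome is the distinctive feature noted above.

```python
import math
import random

def sprt(observations, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald SPRT for Bernoulli data: H0: p = p0 vs H1: p = p1."""
    upper = math.log((1 - beta) / alpha)     # cross above: accept H1
    lower = math.log(beta / (1 - alpha))     # cross below: accept H0
    llr = 0.0
    for i, x in enumerate(observations, start=1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return f"accept H1 after {i} observations"
        if llr <= lower:
            return f"accept H0 after {i} observations"
    return "keep sampling"                   # the third, undecided group

random.seed(3)
print(sprt(random.random() < 0.7 for _ in range(200)))
```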

  6. Annual design-based estimation for the annualized inventories of forest inventory and analysis: sample size determination

    Treesearch

    Hans T. Schreuder; Jin-Mann S. Lin; John Teply

    2000-01-01

    The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...

  7. Experimental Effects on IR Reflectance Spectra: Particle Size and Morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiswenger, Toya N.; Myers, Tanya L.; Brauer, Carolyn S.

    For geologic and extraterrestrial samples it is known that both particle size and morphology can have strong effects on a species' infrared reflectance spectrum. Due to such effects, the reflectance spectra cannot be predicted from the absorption coefficients alone. This is because reflectance is both a surface as well as a bulk phenomenon, incorporating both dispersion and absorption effects. The same spectral feature can even be observed as either a maximum or a minimum. The complex effects depend on particle size and preparation, as well as the relative amplitudes of the optical constants n and k, i.e. the real and imaginary components of the complex refractive index. While somewhat oversimplified, upward-going amplitude in the reflectance spectrum usually results from surface scattering, i.e. rays that have been reflected from the surface without penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. While the effects are well known, we report seminal measurements of reflectance along with quantified particle sizes of the samples, the sizing obtained from optical microscopy measurements. The size measurements are correlated with the reflectance spectra in the 1.3-16 micron range for various bulk materials that have a combination of strong and weak absorption bands, in order to understand the effects on the spectral features as a function of the mean grain size of the sample. We report results for both sodium sulfate Na2SO4 and ammonium sulfate (NH4)2SO4; the optical constants have been measured for (NH4)2SO4. To go a step further, from the laboratory to the field, we explore our understanding of particle size effects on reflectance spectra using standoff detection. This has helped identify weaknesses and strengths in detection at standoff distances of up to 160 meters from the target. The studies have shown that particle size has an enormous influence on the measured reflectance spectra of such materials; successful identification requires sufficient, representative reflectance data to include the particle sizes of interest.

  8. Optimal number of features as a function of sample size for various classification rules.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R

    2005-04-15

    Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study is considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website, which is meant to serve as resource for those working with small-sample classification. For the companion website, please visit http://public.tgen.org/tamu/ofs/ e-dougherty@ee.tamu.edu.
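
    The peaking phenomenon the study maps out can be reproduced in miniature: for a fixed small training sample, the test error of a designed classifier first falls and then rises as features are added. The sketch below uses linear discriminant analysis on a synthetic Gaussian model as a stand-in for the paper's seven rules and three models; all settings are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
d_max = 25
shift = np.zeros(d_max)
shift[:5] = 1.0                              # only 5 informative features

def sample(n_per_class):
    X0 = rng.normal(size=(n_per_class, d_max))
    X1 = rng.normal(size=(n_per_class, d_max)) + shift
    return np.vstack([X0, X1]), np.repeat([0, 1], n_per_class)

Xtr, ytr = sample(15)                        # small training sample (n = 30)
Xte, yte = sample(1000)                      # large test set to estimate true error
for d in (2, 5, 10, 20, 25):
    clf = LinearDiscriminantAnalysis().fit(Xtr[:, :d], ytr)
    print(f"{d:2d} features: test error = {1 - clf.score(Xte[:, :d], yte):.3f}")
```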

  9. Chemical Characterization of an Envelope A Sample from Hanford Tank 241-AN-103

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hay, M.S.

    2000-08-23

    A whole tank composite sample from Hanford waste tank 241-AN-103 was received at the Savannah River Technology Center (SRTC) and chemically characterized. Prior to characterization the sample was diluted to {approximately}5 M sodium concentration. The filtered supernatant liquid, the total dried solids of the diluted sample, and the washed insoluble solids obtained from filtration of the diluted sample were analyzed. A mass balance calculation of the three fractions of the sample analyzed indicates the analytical results are relatively self-consistent for the major components of the sample. However, some inconsistency was observed between results where more than one method of determination was employed and for species present in low concentrations. A direct comparison to previous analyses of material from tank 241-AN-103 was not possible due to the unavailability of data for diluted samples of tank 241-AN-103 whole tank composites. However, the analytical data for other types of samples from 241-AN-103 were mathematically diluted and compare reasonably with the current results. Although the segments of the core samples used to prepare the sample received at SRTC were combined in an attempt to produce a whole tank composite, determining how well the results of the current analysis represent the actual composition of Hanford waste tank 241-AN-103 remains problematic due to the small sample size and the large size of the non-homogenized waste tank.

  10. Air Flow and Pressure Drop Measurements Across Porous Oxides

    NASA Technical Reports Server (NTRS)

    Fox, Dennis S.; Cuy, Michael D.; Werner, Roger A.

    2008-01-01

    This report summarizes the results of air flow tests across eight porous, open-cell ceramic oxide samples. During ceramic specimen processing, the porosity was formed using the sacrificial template technique, with two different sizes of polystyrene beads used for the template. The samples were initially supplied with thicknesses ranging from 0.14 to 0.20 in. (0.35 to 0.50 cm) and nonuniform backside morphology (some areas dense, some porous). Samples were therefore ground to a thickness of 0.12 to 0.14 in. (0.30 to 0.35 cm) using dry 120 grit SiC paper. Pressure drop versus air flow is reported. Comparisons of samples with thickness variations are made, as are pressure drop estimates. As the density of the ceramic material increases, the maximum corrected flow decreases rapidly. Future sample sets should be supplied with samples of similar thickness and uniform surface morphology. This would allow a more consistent determination of air flow versus processing parameters and the resulting porosity size and distribution.

  11. 50 CFR 648.90 - NE multispecies assessment, framework procedures and specifications, and flexible area action...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...; survey results; stock status; current estimates of fishing mortality and overfishing levels; social and... survey data or, if sea sampling data are unavailable, length frequency information from trawl surveys... size; sea sampling, port sampling, and survey data or, if sea sampling data are unavailable, length...

  12. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of the adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample comprised 86 to 250 quadrats of 10 m2. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely not to depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the ten abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
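
    As a worked example of the fixed-precision idea, the sketch below converts fitted Taylor (variance = a·m^b) or Iwao mean-variance parameters into the number of 10 m2 quadrats needed for a target precision D, defined as the ratio of the standard error to the mean. The code and all parameter values are our own illustration, not the values fitted in the study.

      # Hedged sketch: quadrats needed so that SE/mean = D, given a fitted
      # mean-variance model; all parameter values below are hypothetical.
      def n_taylor(m, a, b, D=0.25):
          # Taylor's power law: variance = a * m**b
          return a * m ** (b - 2) / D ** 2

      def n_iwao(m, alpha, beta, D=0.25):
          # Iwao's patchiness regression: variance = (alpha+1)*m + (beta-1)*m**2
          return ((alpha + 1) / m + beta - 1) / D ** 2

      print(round(n_taylor(0.05, a=2.0, b=1.3)))       # ~261 quadrats
      print(round(n_iwao(0.05, alpha=0.1, beta=1.5)))  # ~360 quadrats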

  13. Multiple antiferromagnet/ferromagnet interfaces as a probe of grain-size-dependent exchange bias in polycrystalline Co/Fe50Mn50

    NASA Astrophysics Data System (ADS)

    Bolon, Bruce T.; Haugen, M. A.; Abin-Fuentes, A.; Deneen, J.; Carter, C. B.; Leighton, C.

    2007-02-01

    We have used ferromagnet/antiferromagnet/ferromagnet trilayers and ferromagnet/antiferromagnet multilayers to probe the grain size dependence of exchange bias in polycrystalline Co/Fe50Mn50. X-ray diffraction and transmission electron microscopy show that the Fe50Mn50 (FeMn) grain size increases with increasing FeMn thickness in the Co(30 Å)/FeMn system. Hence, in Co(30 Å)/FeMn(tAF Å)/Co(30 Å) trilayers the two Co layers sample different FeMn grain sizes at the two antiferromagnet/ferromagnet interfaces. For FeMn thicknesses above 100 Å, where simple bilayers have a thickness-independent exchange bias, we are therefore able to deduce the influence of FeMn grain size on the exchange bias and coercivity (and their temperature dependence) simply by measuring trilayer and multilayer samples with varying FeMn thicknesses. This can be done while maintaining the (1 1 1) orientation, and with little variation in interface roughness. Increasing the average grain size from 90 to 135 Å results in a fourfold decrease in exchange bias, following an inverse grain size dependence. We interpret the results as being due to a decrease in uncompensated spin density with increasing antiferromagnet grain size, further evidence for the importance of defect-generated uncompensated spins.

  14. Sub-sampling genetic data to estimate black bear population size: A case study

    USGS Publications Warehouse

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.

  15. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
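
    Both schemes are easy to state in code. The toy sketch below (our own, not the paper's implementation) returns the sampled k-mer start positions: fixed sampling keeps every step-th k-mer, while minimizer sampling keeps the smallest k-mer in each window of w consecutive k-mers.

      # Toy illustration of the two sampling schemes on a DNA string.
      def fixed_sampling(s, k, step):
          # Every step-th k-mer start position.
          return list(range(0, len(s) - k + 1, step))

      def minimizer_sampling(s, k, w):
          # Start of the lexicographically smallest k-mer in each window of
          # w consecutive k-mers, with consecutive duplicates collapsed.
          picks = []
          for i in range(len(s) - k - w + 2):
              j = min((s[j:j + k], j) for j in range(i, i + w))[1]
              if not picks or picks[-1] != j:
                  picks.append(j)
          return picks

      seq = "ACGTACGTTGCA"
      print(fixed_sampling(seq, k=4, step=2))      # [0, 2, 4, 6, 8]
      print(minimizer_sampling(seq, k=4, w=2))     # [0, 1, 2, 4, 5, 6, 8]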

  16. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
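
    The second rule has a closed form when, as in the power example above, total cost is linear in sample size. Writing total cost as c_0 + c_1 n (our notation: c_0 the fixed cost, c_1 the cost per subject), minimizing total cost divided by the square root of sample size gives

      \[
        \frac{d}{dn}\,\frac{c_0 + c_1 n}{\sqrt{n}}
        = \frac{c_1}{\sqrt{n}} - \frac{c_0 + c_1 n}{2 n^{3/2}} = 0
        \quad\Longrightarrow\quad n^{*} = \frac{c_0}{c_1},
      \]

    so under a linear cost function this rule enrolls subjects until the variable cost c_1 n equals the fixed cost c_0.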

  17. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests, and the widely distributed read counts and dispersions across genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments such as The Cancer Genome Atlas (TCGA) can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.

  18. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the inside (cell) proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
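
    For reference, a standard asymptotic (unconditional) sample-size computation for the paired case works directly from the discordant-cell probabilities p01 and p10. The sketch below is our own implementation of the usual normal-approximation formula, with illustrative cell values.

      # Hedged sketch: normal-approximation sample size for McNemar's test.
      import math
      from scipy.stats import norm

      def mcnemar_pairs(p01, p10, alpha=0.05, power=0.80):
          # delta: difference of marginal proportions; psi: discordance rate.
          delta, psi = abs(p01 - p10), p01 + p10
          za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
          n = (za * psi ** 0.5 + zb * (psi - delta ** 2) ** 0.5) ** 2 / delta ** 2
          return math.ceil(n)

      print(mcnemar_pairs(0.15, 0.05))   # about 155 pairs for these cells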

  19. Molecular size-dependent abundance and composition of dissolved organic matter in river, lake and sea waters.

    PubMed

    Xu, Huacheng; Guo, Laodong

    2017-06-15

    Dissolved organic matter (DOM) is ubiquitous in natural waters. The ecological role and environmental fate of DOM are highly related to its chemical composition and size distribution. To evaluate size-dependent DOM quantity and quality, water samples were collected from river, lake, and coastal marine environments and size fractionated through a series of micro- and ultra-filtrations with membranes having different pore-sizes/cutoffs, including 0.7, 0.4, and 0.2 μm and 100, 10, 3, and 1 kDa. Abundance of dissolved organic carbon, total carbohydrates, and chromophoric and fluorescent components in the filtrates decreased consistently with decreasing filter/membrane cutoffs, but with a rapid decline when the filter cutoff reached 3 kDa, showing an evident size-dependent DOM abundance and composition. About 70% of carbohydrates and 90% of humic- and protein-like components were measured in the <3 kDa fraction in freshwater samples, but these percentages were higher in the seawater sample. Spectroscopic properties of DOM, such as specific ultraviolet absorbance, spectral slope, and biological and humification indices, also varied significantly with membrane cutoffs. In addition, different ultrafiltration membranes with the same manufacturer-rated cutoff also gave rise to different DOM retention efficiencies and thus different colloidal abundances and size spectra. Thus, the size-dependent DOM properties were related to both the sample type and the membranes used. Our results here provide not only baseline data for filter pore-size selection when exploring DOM ecological and environmental roles, but also new insights into better understanding the physical definition of DOM and its size continuum in quantity and quality in aquatic environments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Effect of Study Design on Sample Size in Studies Intended to Evaluate Bioequivalence of Inhaled Short‐Acting β‐Agonist Formulations

    PubMed Central

    Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai

    2017-01-01

    Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3‐by‐1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration–recommended 3‐by‐1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3‐by‐1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90‐μg test dose and a 720‐μg reference dose (42% cost reduction). Combining a 180‐μg test dose and a 720‐μg reference dose produced an estimated 36% cost reduction. PMID:29281130
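
    Monte Carlo sample-size estimation of this kind follows a generic recipe: simulate the planned analysis many times at a candidate size, estimate power as the rejection fraction, and grow the size until the target is met. The sketch below shows only the skeleton, with a plain two-sample t test standing in for the actual bioequivalence analysis; all distributions and parameter values are hypothetical.

      # Generic Monte-Carlo sample-size skeleton (illustrative stand-in test).
      import numpy as np
      from scipy.stats import ttest_ind

      rng = np.random.default_rng(1)

      def power(n, effect=0.5, sims=2000):
          hits = 0
          for _ in range(sims):
              a = rng.normal(0.0, 1.0, n)
              b = rng.normal(effect, 1.0, n)
              hits += ttest_ind(a, b).pvalue < 0.05
          return hits / sims

      n = 10
      while power(n) < 0.80:     # smallest n per arm reaching ~80% power
          n += 2
      print(n)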

  2. Tissue recommendations for precision cancer therapy using next generation sequencing: a comprehensive single cancer center’s experiences

    PubMed Central

    Hong, Mineui; Bang, Heejin; Van Vrancken, Michael; Kim, Seungtae; Lee, Jeeyun; Park, Se Hoon; Park, Joon Oh; Park, Young Suk; Lim, Ho Yeong; Kang, Won Ki; Sun, Jong-Mu; Lee, Se Hoon; Ahn, Myung-Ju; Park, Keunchil; Kim, Duk Hwan; Lee, Seunggwan; Park, Woongyang; Kim, Kyoung-Mee

    2017-01-01

    To generate accurate next-generation sequencing (NGS) data, the amount and quality of DNA extracted is critical. We analyzed 1564 tissue samples from patients with metastatic or recurrent solid tumors submitted for NGS according to their sample size, acquisition method, organ, and fixation to propose appropriate tissue requirements. Of the 1564 tissue samples, 481 (30.8%) consisted of fresh-frozen (FF) tissue, and 1083 (69.2%) consisted of formalin-fixed paraffin-embedded (FFPE) tissue. We obtained successful NGS results in 95.9% of cases. Of the 481 FF biopsies, 262 tissue samples were from lung, and the mean fragment size was 2.4 mm. Compared to lung, GI tract tumor fragments showed a significantly lower DNA extraction failure rate (2.1% versus 6.1%, p = 0.04). For FFPE biopsy samples, the size of biopsy tissue was similar regardless of tumor type, with a mean of 0.8 × 0.3 cm, and the mean DNA yield per unstained slide was 114 ng. We obtained the highest amount of DNA from the colorectum (2353 ng) and the lowest amount from the hepatobiliary tract (760.3 ng), likely due to a relatively smaller biopsy size, extensive hemorrhage and necrosis, and lower tumor volume. For FFPE operation specimens, the mean size of the specimen on one unstained slide was 2.0 × 1.0 cm, and the mean DNA yield per unstained slide was 1800 ng. In conclusion, we present our experience with tissue requirements for an appropriate NGS workflow: > 1 mm2 for FF biopsies, > 5 unstained slides for FFPE biopsies, and > 1 unstained slide for FFPE operation specimens, achieving successful test results in 95.9% of cases. PMID:28477007

  3. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.

  4. Effect of sulfate and carbonate minerals on particle-size distributions in arid soils

    USGS Publications Warehouse

    Goossens, Dirk; Buck, Brenda J.; Teng, Yuazxin; Robins, Colin; Goldstein, Harland L.

    2014-01-01

    Arid soils pose unique problems during measurement and interpretation of particle-size distributions (PSDs) because they often contain high concentrations of water-soluble salts. This study investigates the effects of sulfate and carbonate minerals on grain-size analysis by comparing analyses in water, in which the minerals dissolve, and isopropanol (IPA), in which they do not. The presence of gypsum, in particular, substantially affects particle-size analysis once the concentration of gypsum in the sample exceeds the mineral’s solubility threshold. For smaller concentrations particle-size results are unaffected. This is because at concentrations above the solubility threshold fine particles cement together or bind to coarser particles or aggregates already present in the sample, or soluble mineral coatings enlarge grains. Formation of discrete crystallites exacerbates the problem. When soluble minerals are dissolved the original, insoluble grains will become partly or entirely liberated. Thus, removing soluble minerals will result in an increase in measured fine particles. Distortion of particle-size analysis is larger for sulfate minerals than for carbonate minerals because of the much higher solubility in water of the former. When possible, arid soils should be analyzed using a liquid in which the mineral grains do not dissolve, such as IPA, because the results will more accurately reflect the PSD under most arid soil field conditions. This is especially important when interpreting soil and environmental processes affected by particle size.

  5. Optimising a modified free-space permittivity characterisation method for civil engineering applications

    NASA Astrophysics Data System (ADS)

    Muller, Wayne; Scheuermann, Alexander

    2016-04-01

    Measuring the electrical permittivity of civil engineering materials is important for a range of ground penetrating radar (GPR) and pavement moisture measurement applications. Compacted unbound granular (UBG) pavement materials present a number of preparation and measurement challenges using conventional characterisation techniques. As an alternative to these methods, a modified free-space (MFS) characterisation approach has previously been investigated. This paper describes recent work to optimise and validate the MFS technique. The research included finite difference time domain (FDTD) modelling to better understand the nature of wave propagation within material samples and the test apparatus. This research led to improvements in the test approach and optimisation of sample sizes. The influence of antenna spacing and sample thickness on the permittivity results was investigated by a series of experiments separating antennas and measuring samples of nylon and water. Permittivity measurements of samples of nylon and water approximately 100 mm and 170 mm thick were also compared, showing consistent results. These measurements also agreed well with surface probe measurements of the nylon sample and literature values for water. The results indicate permittivity estimates of acceptable accuracy can be obtained using the proposed approach, apparatus and sample sizes.

  6. Stoichiometry of Cd(S,Se) nanocrystals by anomalous small-angle x-ray scattering

    NASA Astrophysics Data System (ADS)

    Ramos, Aline; Lyon, Olivier; Levelut, Claire

    1995-12-01

    In Cd(S,Se)-doped glasses the optical properties are strongly dependent on the size of the nanocrystals, but can also be largely modified by changes in the crystal stoichiometry; however, the information on both stoichiometry and size is difficult to obtain in crystals smaller than 10 nm. The intensity scattered at small angles is classically used to get information about nanoparticle sizes. Moreover, the variation of the amplitude of this intensity with the energy of the x ray ("the anomalous effect") near the selenium edge is related to stoichiometry. Anomalous small-angle x-ray scattering has been used as a tentative method to get information about stoichiometry in nanocrystals smaller than 10 nm. Experiments have been performed on samples treated for 2 days at temperatures in the range 540-650 °C. The samples treated at temperatures above 580 °C contain crystals with size larger than 4 nm. For all these samples the anomalous effect has nearly the same amplitude, and we found the stoichiometry x=0.4 for the CdSxSe1-x nanocrystals. This agrees with the previous results obtained by scanning electron microscopy and Raman spectroscopy. The results are also confirmed by measurements of the position of the optical absorption edge and by wide-angle x-ray scattering experiments. For the sample treated at 560 °C, the nanocrystal size is 3 nm and the stoichiometry x=0.6 is deduced from the anomalous effect. For samples treated at lower temperatures the anomalous effect is not observable, indicating an even lower selenium content in the nanocrystals (x≳0.7). We observed differences in the Se content of nanocrystals for different heat treatments of the same initial glass. These results may be very helpful in interpreting the change in the optical properties when the treatment temperature decreases in the range 560-590 °C. In this temperature range, compositional effects seem to be of the same order of magnitude as the effects of quantum confinement.

  7. Effect of particle size and percentages of Boron carbide on the thermal neutron radiation shielding properties of HDPE/B4C composite: Experimental and simulation studies

    NASA Astrophysics Data System (ADS)

    Soltani, Zahra; Beigzadeh, Amirmohammad; Ziaie, Farhood; Asadi, Eskandar

    2016-10-01

    In this paper, the effects of particle size and weight percentage of the reinforcement phase on the thermal neutron absorption ability of HDPE/B4C composites were investigated by means of Monte Carlo simulation using the MCNP code and by experimental studies. The composite samples were prepared from HDPE filled with different weight percentages of boron carbide powder in the form of micro- and nanoparticles. Micro- and nanocomposites were prepared under similar mixing and moulding processes. The samples were subjected to thermal neutron radiation. Neutron shielding efficiency, in terms of the neutron transmission fractions of the composite samples, was investigated and compared with simulation results. According to the simulation results, the particle size of the radiation shielding material plays an important role in the shielding efficiency. Decreasing the particle size of the shielding material at each weight percentage of the reinforcement phase yielded better radiation shielding properties. It appears that the smaller size and more homogeneous distribution of the nano-form B4C particles increase the collision probability between incident thermal neutrons and the shielding material, which consequently improves the radiation shielding properties. These results suggest the feasibility of nanocomposites as shielding materials offering high shielding performance, low weight, and low thickness along with economic benefits.

  8. Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies

    PubMed Central

    Rast, Philippe; Hofer, Scott M.

    2014-01-01

    We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to achieve power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, between error variance and GRR, and due to parameter values largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e. first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544

  9. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features.

    PubMed

    McDonald, Linda S; Panozzo, Joseph F; Salisbury, Phillip A; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective.

  10. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    PubMed

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  11. Effects of normalization on quantitative traits in association test

    PubMed Central

    2009-01-01

    Background: Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results: We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization is also investigated. Our results show that rank-based transformation generally gives the best and most consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion: For small sample sizes or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample sizes and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
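
    Among these, a common rank-based choice is the rank-based inverse normal transformation. A minimal sketch (Blom offsets assumed; our own code, not the study's) is:

      # Map ranks to standard-normal quantiles: Phi^-1((r - c) / (n - 2c + 1)).
      import numpy as np
      from scipy.stats import norm, rankdata

      def rank_inverse_normal(x, c=0.375):
          r = rankdata(x)                    # ranks 1..n, ties averaged
          return norm.ppf((r - c) / (len(x) - 2 * c + 1))

      y = np.random.default_rng(2).exponential(size=8)   # a skewed "trait"
      print(rank_inverse_normal(y).round(2))             # ~ N(0,1) scores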

  12. Mesh-size effects on drift sample composition as determined with a triple net sampler

    USGS Publications Warehouse

    Slack, K.V.; Tilley, L.J.; Kennelly, S.S.

    1991-01-01

    Nested nets of three different mesh apertures were used to study mesh-size effects on drift collected in a small mountain stream. The innermost, middle, and outermost nets had, respectively, 425 µm, 209 µm and 106 µm openings, a design that reduced clogging while partitioning collections into three size groups. The open area of mesh in each net, from largest to smallest mesh opening, was 3.7, 5.7 and 8.0 times the area of the net mouth. Volumes of filtered water were determined with a flowmeter. The results are expressed as (1) drift retained by each net, (2) drift that would have been collected by a single net of given mesh size, and (3) the percentage of total drift (the sum of the catches from all three nets) that passed through the 425 µm and 209 µm nets. During a two day period in August 1986, Chironomidae larvae were dominant numerically in all 209 µm and 106 µm samples and midday 425 µm samples. Large drifters (Ephemerellidae) occurred only in 425 µm or 209 µm nets, but the general pattern was an increase in abundance and number of taxa with decreasing mesh size. Relatively more individuals occurred in the larger mesh nets at night than during the day. The two larger mesh sizes retained 70% of the total sediment/detritus in the drift collections, and this decreased the rate of clogging of the 106 µm net. If an objective of a sampling program is to compare drift density or drift rate between areas or sampling dates, the same mesh size should be used for all sample collection and processing. The mesh aperture used for drift collection should retain all species and life stages of significance in a study. The nested net design enables an investigator to test the adequacy of drift samples. © 1991 Kluwer Academic Publishers.

  13. Factors Affecting Pathogen Survival in Finished Dairy Compost with Different Particle Sizes Under Greenhouse Conditions.

    PubMed

    Diao, Junshu; Chen, Zhao; Gong, Chao; Jiang, Xiuping

    2015-09-01

    This study investigated the survival of Escherichia coli O157:H7 and Salmonella Typhimurium in finished dairy compost with different particle sizes during storage as affected by moisture content and temperature under greenhouse conditions. The mixture of E. coli O157:H7 and S. Typhimurium strains was inoculated into the finished composts with moisture contents of 20, 30, and 40%, separately. The finished compost samples were then sieved into 3 different particle sizes (>1000, 500-1000, and <500 μm) and stored under greenhouse conditions. For compost samples with moisture contents of 20 and 30%, the average Salmonella reductions in compost samples with particle sizes of >1000, 500-1000, and <500 μm were 2.15, 2.27, and 2.47 log colony-forming units (CFU) g^-1 within 5 days of storage in summer, respectively, as compared with 1.60, 2.03, and 2.26 log CFU g^-1 in late fall, respectively, and 2.61, 3.33, and 3.67 log CFU g^-1 in winter, respectively. The average E. coli O157:H7 reductions in compost samples with particle sizes of >1000, 500-1000, and <500 μm were 1.98, 2.30, and 2.54 log CFU g^-1 within 5 days of storage in summer, respectively, as compared with 1.70, 2.56, and 2.90 log CFU g^-1 in winter, respectively. Our results revealed that both Salmonella and E. coli O157:H7 in compost samples with larger particle sizes survived better than those in samples with smaller particle sizes, and that the initial rapid moisture loss in compost may contribute to the fast inactivation of pathogens in the finished compost. For the same season, the pathogens in the compost samples with the same particle size survived much better at the initial moisture content of 20% compared to 40%.

  14. Tooth Size Variation Related to Age in Amboseli Baboons

    PubMed Central

    Galbany, Jordi; Dotras, Laia; Alberts, Susan C.; Pérez-Pérez, Alejandro

    2011-01-01

    We measured molar size in a single population of wild baboons from Amboseli (Kenya), both females (n = 57) and males (n = 50). All the females were of known age; the males represented a mix of known-age individuals (n = 31) and individuals with ages estimated to within 2 years (n = 19). The results showed a significant reduction in the mesiodistal length of teeth in both sexes as a function of age. Overall patterns of age-related change in tooth size did not change whether we included or excluded the individuals of estimated age, but patterns of statistical significance changed as a result of changed sample sizes. Our results demonstrate that tooth length is directly related to age due to interproximal wear caused by M2 and M3 compression loads. Dental studies in primates, including both fossil and extant species, are mostly based on specimens obtained from osteological collections of varying origins, for which the age at death of each individual in the sample is not known. Researchers should take into account the phenomenon of interproximal attrition leading to reduced tooth size when measuring tooth length for odontometric purposes. PMID:21325862

  15. Design of gefitinib-loaded poly (l-lactic acid) microspheres via a supercritical anti-solvent process for dry powder inhalation.

    PubMed

    Lin, Qing; Liu, Guijin; Zhao, Ziyi; Wei, Dongwei; Pang, Jiafeng; Jiang, Yanbin

    2017-10-30

    To develop a safer, more stable and potent formulation of gefitinib (GFB), microspheres of GFB encapsulated in poly(l-lactic acid) (PLLA) were prepared by supercritical anti-solvent (SAS) technology in this study. Operating factors were optimized using a selected OA16(4^5) orthogonal array design, and the properties of the raw material and SAS-processed samples were characterized by different methods. The results show that the GFB-loaded PLLA particles prepared were spherical, with a smaller and narrower particle size compared with raw GFB. The optimal GFB-loaded PLLA sample was prepared with less aggregation, the highest GFB loading (15.82%) and a smaller size (D50 = 2.48 μm, which meets the size requirement of dry powder inhalers). The results of XRD and DSC indicate that GFB is encapsulated in the PLLA matrix in a polymorphic form different from raw GFB. FT-IR results show that the chemical structure of GFB does not change after the SAS process. In vitro release results show that the optimal sample released more slowly than raw GFB particles. Moreover, in vitro anti-cancer trials show that the optimal sample had a higher cytotoxicity than raw GFB. After blending with sieved lactose, the flowability and aerosolization performance of the optimal sample for DPI were improved, with the angle of repose decreasing from 38.4° to 23°, the emitted dose increasing from 63.21% to >90%, and the fine particle fraction increasing from 23.37% to >30%. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Does the bathing water classification depend on sampling strategy? A bootstrap approach for bathing water quality assessment, according to Directive 2006/7/EC requirements.

    PubMed

    López, Iago; Alvarez, César; Gil, José L; Revilla, José A

    2012-11-30

    Data on the 95th and 90th percentiles of bacteriological quality indicators are used to classify bathing waters in Europe, according to the requirements of Directive 2006/7/EC. However, percentile values, and consequently the classification of bathing waters, depend on both sampling effort and sample size, which may undermine an appropriate assessment of bathing water classification. To analyse the influence of sampling effort and sample size on water classification, a bootstrap approach was applied to 55 bacteriological quality datasets of several beaches in the Balearic Islands (Spain). Our results show that the probability of failing the regulatory standards of the Directive is high when sample size is low, due to a higher variability in percentile values. For example, 49% of the bathing waters reaching an "Excellent" classification (95th percentile of Escherichia coli under 250 cfu/100 ml) can fail the "Excellent" regulatory standard due to sampling strategy, when 23 samples per season are considered. This percentage increases to 81% when 4 samples per season are considered. "Good" regulatory standards can also be failed in bathing waters with an "Excellent" classification as a result of these sampling strategies. The variability in percentile values may affect bathing water classification and is critical for the appropriate design and implementation of bathing water Quality Monitoring and Assessment Programs. Hence, variability of percentile values should be taken into account by authorities if an adequate management of these areas is to be achieved. Copyright © 2012 Elsevier Ltd. All rights reserved.
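
    The bootstrap logic is straightforward to sketch. The code below resamples a season's E. coli counts at a given sample size and estimates the probability that the log10-based 95th percentile used by the Directive (computed here as 10^(mean log10 + 1.65·sd log10)) exceeds the "Excellent" limit of 250 cfu/100 ml; the season's underlying distribution is hypothetical.

      # Hedged sketch of bootstrap classification uncertainty vs. sample size.
      import numpy as np

      rng = np.random.default_rng(3)
      season = rng.lognormal(mean=3.5, sigma=1.0, size=200)  # hypothetical counts

      def p95(x):
          lg = np.log10(x)
          return 10 ** (lg.mean() + 1.65 * lg.std(ddof=1))   # Directive percentile

      def fail_prob(n, boots=5000, limit=250):
          fails = sum(p95(rng.choice(season, n)) > limit for _ in range(boots))
          return fails / boots

      for n in (4, 23):
          print(n, fail_prob(n))   # fewer samples -> more variable classification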

  17. Coalescence computations for large samples drawn from populations of time-varying sizes

    PubMed Central

    Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek

    2017-01-01

    We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for coalescent with large sample size. The obtained results are based on computational methodologies, which involve combining coalescence time scale changes with techniques of integral transformations and using analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluation of accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for analysis of large human mitochondrial DNA dataset. PMID:28170404
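
    For orientation, the quantities being generalized here have simple classical expectations in the constant-population-size coalescent (time in coalescent units), which the time-scale changes mentioned above extend to time-varying sizes:

      \[
        \mathbb{E}[T_k] = \binom{k}{2}^{-1} = \frac{2}{k(k-1)},
        \qquad
        \mathbb{E}[T_{\mathrm{MRCA}}] = \sum_{k=2}^{n} \frac{2}{k(k-1)} = 2\Bigl(1 - \frac{1}{n}\Bigr),
      \]

    where T_k is the time during which the tree has exactly k lineages and n is the sample size.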

  18. Calibration correction of an active scattering spectrometer probe to account for refractive index of stratospheric aerosols

    NASA Technical Reports Server (NTRS)

    Pueschel, R. F.; Overbeck, V. R.; Snetsinger, K. G.; Russell, P. B.; Ferry, G. V.

    1990-01-01

    The use of the active scattering spectrometer probe (ASAS-X) to measure sulfuric acid aerosols on U-2 and ER-2 research aircraft has yielded results that are at times ambiguous due to the dependence of particles' optical signatures on refractive index as well as physical dimensions. The calibration correction of the ASAS-X optical spectrometer probe for stratospheric aerosol studies is validated through an independent and simultaneous sampling of the particles with impactors; sizing and counting of particles on SEM images yields total particle areas and volumes. Upon correction of calibration in light of these data, spectrometer results averaged over four size distributions are found to agree with similarly averaged impactor results to within a few percent, indicating that the optical properties or chemical composition of the sample aerosol must be known in order to achieve accurate optical aerosol spectrometer size analysis.

  19. The impact of hypnotic suggestibility in clinical care settings

    PubMed Central

    Montgomery, Guy H.; Schnur, Julie B.; David, Daniel

    2013-01-01

    Hypnotic suggestibility has been described as a powerful predictor of outcomes associated with hypnotic interventions. However, there have been no systematic approaches to quantifying this effect across the literature. The present meta-analysis evaluates the magnitude of the effect of hypnotic suggestibility on hypnotic outcomes in clinical settings. PsycINFO and PubMed were searched from their inception through July 2009. Thirty-four effects from ten studies and 283 participants are reported. Results revealed a statistically significant overall effect size in the small to medium range (r = 0.24; 95% Confidence Interval = −0.28 to 0.75), indicating that greater hypnotic suggestibility led to greater effects of hypnosis interventions. Hypnotic suggestibility accounted for 6% of the variance in outcomes. Smaller sample size studies, use of the SHCS, and pediatric samples tended to result in larger effect sizes. Results question the usefulness of assessing hypnotic suggestibility in clinical contexts. PMID:21644122

  1. Experimental and numerical modeling research of rubber material during microwave heating process

    NASA Astrophysics Data System (ADS)

    Chen, Hailong; Li, Tao; Li, Kunling; Li, Qingling

    2018-05-01

    This paper investigates the heating behavior of block rubber by experimental and simulation methods. The COMSOL Multiphysics 5.0 software was utilized in the numerical simulation work. The effects of microwave frequency, power and sample size on the temperature distribution are examined. The effect of frequency on the temperature distribution is pronounced: the maximum and minimum temperatures of the block rubber first increase and then decrease as frequency increases. The microwave heating efficiency is highest at a frequency of 2450 MHz, although the other frequencies yield more uniform temperature distributions. The influence of microwave power on the temperature distribution is also notable: the smaller the power, the more uniform the temperature distribution in the block rubber, while the effect of power on heating efficiency is minor. The effect of sample size on the temperature distribution is evident as well: the smaller the sample, the more uniform the temperature distribution but the lower the heating efficiency. The results can serve as a reference for research on heating rubber materials by microwave technology.

  2. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
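
    In notation consistent with the description above (our own formalization, not the authors' exact display), with y the response vector and B_j the matrix of B-spline basis functions for the j-th additive component, the adaptive group Lasso solves

      \[
        \min_{\beta_1,\ldots,\beta_p}\;
        \Bigl\| y - \sum_{j=1}^{p} B_j \beta_j \Bigr\|_2^2
        + \lambda \sum_{j=1}^{p} w_j \|\beta_j\|_2,
      \]

    where the weights w_j = \|\tilde{\beta}_j\|_2^{-1} are computed from an initial ordinary group-Lasso fit (with w_j taken as infinite for groups estimated as zero, so those components are removed from the model).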

  3. Integrated investigation of the mixed origin of lunar sample 72161,11

    NASA Technical Reports Server (NTRS)

    Basu, A.; Des Marais, D. J.; Hayes, J. M.; Meinschein, W. G.

    1975-01-01

    The comminution-agglutination model and the solar-wind implantation-retention model are used to postulate the origins of the particulate components of lunar sample 72161,11, a submillimeter fraction of a surface sample from the dark mantle regolith at LRV-3. Grain-size analysis was performed by wet sieving with liquid argon, and analyses for CO2, CO, CH4, and H2 were carried out by stepwise pyrolysis in a helium atmosphere. The results indicate that the present sample is from a mature regolith, but the agglutinate content is only 30% in the particle-size range between 90 and 177 microns, indicating an apparent departure from steady state. Analyses of the carbon, methane, and hydrogen concentrations in size fractions larger than 149 microns show that the volume-correlated component of these species increases with grain size. It is suggested that the observed increase can be explained in terms of mixing of a dominant local population of coarser agglutinates having high carbon and hydrogen concentrations with an imported population of finer agglutinates relatively poor in carbon and hydrogen.

  4. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N*^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust

    PubMed Central

    Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin

    2015-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881

  6. Consideration of kaolinite interference correction for quartz measurements in coal mine dust.

    PubMed

    Lee, Taekhee; Chisholm, William P; Kashon, Michael; Key-Schwartz, Rosa J; Harper, Martin

    2013-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed "deviation," not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected.

  7. Blinded versus unblinded estimation of a correlation coefficient to inform interim design adaptations.

    PubMed

    Kunz, Cornelia U; Stallard, Nigel; Parsons, Nicholas; Todd, Susan; Friede, Tim

    2017-03-01

    Regulatory authorities require that the sample size of a confirmatory trial is calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under- or overestimation of the sample size. Both situations are unfavourable as the first one decreases the power and the latter one leads to a waste of resources. Hence, designs have been suggested that allow a re-assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one-sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance depending on the sample size per group and the number of groups. © 2016 The Authors. Biometrical Journal Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
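
    A minimal simulation of the contrast the authors describe, assuming bivariate normal data; the blinded (naive one-sample) estimator pools all observations and ignores group labels, while the unblinded estimator averages within-group correlations. The group-mean shift delta and all numeric settings are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        rho, delta, n_g = 0.6, 1.0, 50
        cov = np.array([[1.0, rho], [rho, 1.0]])

        naive, unblinded = [], []
        for _ in range(2000):
            g1 = rng.multivariate_normal([0.0, 0.0], cov, n_g)
            g2 = rng.multivariate_normal([delta, delta], cov, n_g)   # shifted means
            pooled = np.vstack([g1, g2])
            naive.append(np.corrcoef(pooled.T)[0, 1])        # blinded: labels ignored
            unblinded.append(0.5 * (np.corrcoef(g1.T)[0, 1]  # unblinded: within groups
                                    + np.corrcoef(g2.T)[0, 1]))
        print("true rho:", rho)
        print("blinded (naive) mean:", round(float(np.mean(naive)), 3))   # biased upward
        print("unblinded mean:     ", round(float(np.mean(unblinded)), 3))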

  8. Blinded versus unblinded estimation of a correlation coefficient to inform interim design adaptations

    PubMed Central

    Stallard, Nigel; Parsons, Nicholas; Todd, Susan; Friede, Tim

    2016-01-01

    Regulatory authorities require that the sample size of a confirmatory trial is calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under‐ or overestimation of the sample size. Both situations are unfavourable as the first one decreases the power and the latter one leads to a waste of resources. Hence, designs have been suggested that allow a re‐assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one‐sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance depending on the sample size per group and the number of groups. PMID:27886393

  9. Fossil shrews from Honduras and their significance for late glacial evolution in body size (Mammalia: Soricidae: Cryptotis)

    USGS Publications Warehouse

    Woodman, N.; Croft, D.A.

    2005-01-01

    Our study of mammalian remains excavated in the 1940s from McGrew Cave, north of Copan, Honduras, yielded an assemblage of 29 taxa that probably accumulated predominantly as the result of predation by owls. Among the taxa present are three species of small-eared shrews, genus Cryptotis. One species, Cryptotis merriami, is relatively rare among the fossil remains. The other two shrews, Cryptotis goodwini and Cryptotis orophila, are abundant and exhibit morphometrical variation distinguishing them from modern populations. Fossils of C. goodwini are distinctly and consistently smaller than modern members of the species. To quantify the size differences, we derived common measures of body size for fossil C. goodwini using regression models based on modern samples of shrews in the Cryptotis mexicana-group. Estimated mean length of head and body for the fossil sample is 72-79 mm, and estimated mean mass is 7.6-9.6 g. These numbers indicate that the fossil sample averaged 6-14% smaller in head and body length and 39-52% less in mass than the modern sample and that increases of 6-17% in head and body length and 65-108% in mass occurred to achieve the mean body size of the modern sample. Conservative estimates of fresh (wet) food intake based on mass indicate that such a size increase would require a 37-58% increase in daily food consumption. In contrast to C. goodwini, fossil C. orophila from the cave is not different in mean body size from modern samples. The fossil sample does, however, show slightly greater variation in size than is currently present throughout the modern geographical distribution of the taxon. Moreover, variation in some other dental and mandibular characters is more constrained, exhibiting a more direct relationship to overall size. Our study of these species indicates that North American shrews have not all been static in size through time, as suggested by some previous work with fossil soricids. Lack of stratigraphic control within the site and our failure to obtain reliable radiometric dates on remains restrict our opportunities to place the site in a firm temporal context. However, the morphometrical differences we document for fossil C. orophila and C. goodwini show them to be distinct from modern populations of these shrews. Some other species of fossil mammals from McGrew Cave exhibit distinct size changes of the magnitudes experienced by many northern North American and some Mexican mammals during the transition from late glacial to Holocene environmental conditions, and it is likely that at least some of the remains from the cave are late Pleistocene in age. One curious factor is that, whereas most mainland mammals that exhibit large-scale size shifts during the late glacial/postglacial transition experienced dwarfing, C. goodwini increased in size. The lack of clinal variation in modern C. goodwini supports the hypothesis that size evolution can result from local selection rather than from cline translocation. Models of size change in mammals indicate that increased size, such as that observed for C. goodwini, is a likely consequence of increased availability of resources and, thereby, a relaxation of selection during critical times of the year.

  10. Elastic moduli in nano-size samples of amorphous solids: System size dependence

    NASA Astrophysics Data System (ADS)

    Cohen, Yossi; Procaccia, Itamar

    2012-08-01

    This letter is motivated by some recent experiments on pancake-shaped nano-samples of metallic glass that indicate a decline in the measured shear modulus upon decreasing the sample radius. Similar measurements on crystalline samples of the same dimensions showed a much more modest change. In this letter we offer a theory of this phenomenon; we argue that such results are generically expected for any amorphous solid, with the main effect being related to the increased contribution of surfaces with respect to the bulk when the samples get smaller. We employ exact relations between the shear modulus and the eigenvalues of the system's Hessian matrix to explore the role of surface modes in affecting the elastic moduli.

  11. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768

  12. QESA: Quarantine Extraterrestrial Sample Analysis Methodology

    NASA Astrophysics Data System (ADS)

    Simionovici, A.; Lemelle, L.; Beck, P.; Fihman, F.; Tucoulou, R.; Kiryukhina, K.; Courtade, F.; Viso, M.

    2018-04-01

    Our nondestructive, nm-scale, hyperspectral analysis methodology, combining X-ray/Raman/IR probes under BSL4 quarantine, renders our patented mini-sample holder ideal for detecting extraterrestrial life. Our Stardust and Archean results validate it.

  13. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. The tables were derived from the sensitivity and specificity test formulation using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. Approaches for using the tables are also discussed. PMID:27891446
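
    The tables themselves are not reproduced here, but a widely used precision-based closed form (in the style of Buderer's 1996 formula; an assumption, since the abstract does not name the formula behind its tables) can be sketched as follows.

        from math import ceil
        from scipy.stats import norm

        def n_total_for_sensitivity(sens, d, prev, alpha=0.05):
            """Total subjects needed so the sensitivity CI half-width is at most d
            (prev = disease prevalence; cases = n_total * prev)."""
            z = norm.ppf(1 - alpha / 2)
            return ceil(z ** 2 * sens * (1 - sens) / (d ** 2 * prev))

        def n_total_for_specificity(spec, d, prev, alpha=0.05):
            """Same idea for specificity; controls = n_total * (1 - prev)."""
            z = norm.ppf(1 - alpha / 2)
            return ceil(z ** 2 * spec * (1 - spec) / (d ** 2 * (1 - prev)))

        # expected sensitivity 0.90, precision +/-0.05, prevalence 0.20:
        print(n_total_for_sensitivity(0.90, 0.05, 0.20))   # -> 692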

  14. Annual cycle of size-resolved organic aerosol characterization in an urbanized desert environment

    NASA Astrophysics Data System (ADS)

    Cahill, Thomas M.

    2013-06-01

    Studies of size-resolved organic speciation of aerosols are still relatively rare and are generally only conducted over short durations. However, size-resolved organic data can both suggest possible sources of the aerosols and indicate human exposure to the chemicals, since different aerosol sizes have different lung capture efficiencies. The objective of this study was to conduct size-resolved organic aerosol speciation for a calendar year in Phoenix, Arizona, to determine the seasonal variations in both chemical concentrations and size profiles. The results showed large seasonal differences in combustion pollutants, where the highest concentrations were observed in winter. Summertime aerosols have a greater proportion of biological compounds (e.g. sugars and fatty acids), and the biological compounds represent the largest fraction of the organic compounds detected. These results suggest that standard organic carbon (OC) measurements might be heavily influenced by primary biological compounds, particularly if the samples are PM10 and TSP samples. Several large dust storms did not significantly alter the organic aerosol profile since Phoenix resides in a dusty desert environment, so the soil and plant tracer trehalose was almost always present. The aerosol size profiles showed that PAHs were generally most abundant in the smallest aerosol size fractions, which are most likely to be captured by the lung, while the biological compounds were almost exclusively found in the coarse size fraction.

  15. Estimating the Size of a Large Network and its Communities from a Random Sample

    PubMed Central

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W.

    2017-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios. PMID:28867924
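
    PULSE itself is not spelled out in the abstract; the sketch below implements a far simpler moment estimator under the same observation model (induced subgraph plus total degrees), using the fact that a uniformly sampled vertex with total degree d has expected induced degree d(n-1)/(N-1). The use of networkx and all numeric values are illustrative assumptions.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(3)
        N_true = 5000
        G = nx.fast_gnp_random_graph(N_true, 0.004, seed=3)

        n = 400
        W = rng.choice(N_true, size=n, replace=False).tolist()
        sub = G.subgraph(W)

        d_total = sum(d for _, d in G.degree(W))      # total degree of each sampled vertex
        d_induced = sum(d for _, d in sub.degree())   # degree within the sample only
        # E[d_induced] ~ E[d_total] * (n - 1) / (N - 1) under uniform vertex sampling
        N_hat = 1 + (n - 1) * d_total / d_induced
        print(f"true N = {N_true}, moment estimate = {N_hat:.0f}")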

  16. Estimating the Size of a Large Network and its Communities from a Random Sample.

    PubMed

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.

  17. Gear and seasonal bias associated with abundance and size structure estimates for lentic freshwater fishes

    USGS Publications Warehouse

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    All freshwater fish sampling methods are biased toward particular species, sizes, and sexes and are further influenced by season, habitat, and fish behavior changes over time. However, little is known about gear-specific biases for many common fish species because few multiple-gear comparison studies exist that have incorporated seasonal dynamics. We sampled six lakes and impoundments representing a diversity of trophic and physical conditions in Iowa, USA, using multiple gear types (i.e., standard modified fyke net, mini-modified fyke net, sinking experimental gill net, bag seine, benthic trawl, boat-mounted electrofisher used diurnally and nocturnally) to determine the influence of sampling methodology and season on fisheries assessments. Specifically, we describe the influence of season on catch per unit effort, proportional size distribution, and the number of samples required to obtain 125 stock-length individuals for 12 species of recreational and ecological importance. Mean catch per unit effort generally peaked in the spring and fall as a result of increased sampling effectiveness in shallow areas and seasonal changes in habitat use (e.g., movement offshore during summer). Mean proportional size distribution decreased from spring to fall for white bass Morone chrysops, largemouth bass Micropterus salmoides, bluegill Lepomis macrochirus, and black crappie Pomoxis nigromaculatus, suggesting selectivity for large and presumably sexually mature individuals in the spring and summer. Overall, the mean number of samples required to sample 125 stock-length individuals was minimized in the fall with sinking experimental gill nets, a boat-mounted electrofisher used at night, and standard modified nets for 11 of the 12 species evaluated. Our results provide fisheries scientists with relative comparisons between several recommended standard sampling methods and illustrate the effects of seasonal variation on estimates of population indices that will be critical to the future development of standardized sampling methods for freshwater fish in lentic ecosystems.

  18. Vapor-phase photo-oxidation of methanol over nanosize titanium dioxide clusters dispersed in MCM-41 host material part 1: synthesis and characterization.

    PubMed

    Bhattacharya, K; Tripathi, A K; Dey, G K; Gupta, N M

    2005-05-01

    Nanosize clusters of titania were dispersed in a mesoporous MCM-41 silica matrix with the help of the incipient wet-impregnation route, using an isopropanol solution of titanium isopropoxide as precursor. The clusters thus formed were of pure anatase phase and their size depended upon the titania loading. In the case of low (< 15 wt %) loadings, the TiO2 particles were X-ray and laser-Raman amorphous, confirming very high dispersion. These particles were mostly ≤ 2 nm in size. On the other hand, larger size clusters (2-15 nm) were present in a sample with a higher loading of approximately 21 wt %. These particles of titania, irrespective of their size, exhibited an absorbance behavior similar to that of bulk TiO2. Powder X-ray diffraction, N2-adsorption and transmission electron microscopy results showed that while smaller size particles were confined mostly inside the pore system, the larger size particles occupied the external surface of the host matrix. At the same time, the structural integrity of the host was maintained even though some deformation in the pore system was noticed in the case of the sample having the highest loading. The core level X-ray photoelectron spectroscopy results revealed a +4 valence state of Ti in all the samples. A positive binding energy shift and the increase of the width of Ti 2p peaks were observed, however, with the decrease in the particle size of supported titania crystallites, indicative of a microenvironment for surface sites that is different from that of the bulk.

  19. Variation in aluminum, iron, and particle concentrations in oxic ground-water samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    USGS Publications Warehouse

    Szabo, Z.; Oden, J.H.; Gibs, J.; Rice, D.E.; Ding, Y.; ,

    2001-01-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering. Variations in concentrations of aluminum and iron (1-74 and 1-199 ug/L (micrograms per liter), respectively), common indicators of the presence of particulate-borne trace elements, were greatest in sample sets from individual wells with the greatest variations in turbidity and particle concentration. Differences in trace-element concentrations in sequentially collected unfiltered samples with variable turbidity were 5 to 10 times as great as those in concurrently collected samples that were passed through various filters. These results indicate that turbidity must be both reduced and stabilized even when low-flow sample-collection techniques are used in order to obtain water samples that do not contain considerable particulate artifacts. Currently (2001) available techniques need to be refined to ensure that the measured trace-element concentrations are representative of those that are mobile in the aquifer water.

  20. Impact of crystalline defects and size on X-ray line broadening: A phenomenological approach for tetragonal SnO2 nanocrystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muhammed Shafi, P.; Chandra Bose, A., E-mail: acbose@nitt.edu

    2015-05-15

    Nanocrystalline tin oxide (SnO2) powders with different grain sizes were prepared by a chemical precipitation method. The reaction was carried out by varying the period of hydrolysis, and the as-prepared samples were annealed at different temperatures. The samples were characterized using an X-ray powder diffractometer and transmission electron microscopy. The microstrain and crystallite size were calculated for all the samples using Williamson-Hall (W-H) models, namely the isotropic strain model (ISM), the anisotropic strain model (ASM), and the uniform deformation energy density model (UDEDM). The morphology and particle size were determined using TEM micrographs. The direction-dependent Young's modulus was expressed as an equation relating the elastic compliances (s_ij) and the Miller indices of the lattice plane (hkl) for the tetragonal crystal system, and the equation for elastic compliance in terms of stiffness constants was also derived. The changes in crystallite size and microstrain due to lattice defects were observed while varying the hydrolysis time and the annealing temperature. The dependence of crystallite size on lattice strain was studied. The results were correlated with available studies of electrical properties using impedance spectroscopy.
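
    The isotropic W-H model underlying this kind of analysis is the linear relation beta*cos(theta) = K*lambda/D + 4*epsilon*sin(theta), fitted across reflections; a minimal fitting sketch in Python, with made-up peak widths rather than the paper's data:

        import numpy as np

        # Illustrative 2-theta peak positions (deg) and instrument-corrected
        # FWHMs (rad) for SnO2 reflections -- not the paper's measurements.
        two_theta = np.array([26.6, 33.9, 37.9, 51.8, 54.8])
        beta = np.array([0.009, 0.010, 0.010, 0.012, 0.013])
        lam, K = 1.5406e-10, 0.9              # Cu K-alpha wavelength (m), shape factor

        theta = np.radians(two_theta / 2.0)
        x = 4.0 * np.sin(theta)               # strain-term abscissa
        y = beta * np.cos(theta)              # size-strain broadening ordinate
        strain, intercept = np.polyfit(x, y, 1)
        D = K * lam / intercept               # crystallite size (m)
        print(f"crystallite size ~ {D * 1e9:.0f} nm, microstrain ~ {strain:.1e}")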

  1. Explanation of Two Anomalous Results in Statistical Mediation Analysis.

    PubMed

    Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
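
    The bias-corrected bootstrap under discussion can be sketched compactly; the data-generating values below (n = 50, a medium a path, a zero b path, so the true mediated effect ab is zero) are illustrative assumptions chosen to match the condition where the elevated Type I error rate appears.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(4)
        n, a, b = 50, 0.6, 0.0              # zero b path: true mediated effect ab = 0
        X = rng.normal(size=n)
        M = a * X + rng.normal(size=n)
        Y = b * M + rng.normal(size=n)

        def ab_hat(X, M, Y):
            """Product-of-coefficients estimate: a from M~X, b from Y~M+X."""
            a_ = np.linalg.lstsq(np.c_[X, np.ones_like(X)], M, rcond=None)[0][0]
            b_ = np.linalg.lstsq(np.c_[M, X, np.ones_like(X)], Y, rcond=None)[0][0]
            return a_ * b_

        est = ab_hat(X, M, Y)
        boot = np.empty(2000)
        for i in range(boot.size):
            idx = rng.integers(0, n, n)
            boot[i] = ab_hat(X[idx], M[idx], Y[idx])

        z0 = norm.ppf((boot < est).mean())        # bias-correction constant
        lo = norm.cdf(2 * z0 + norm.ppf(0.025))   # adjusted percentile levels
        hi = norm.cdf(2 * z0 + norm.ppf(0.975))
        print("BC bootstrap 95% CI for ab:", np.quantile(boot, [lo, hi]))
        # Repeating over many simulated datasets shows the CI excluding 0 more
        # often than the nominal 5% in this small-n, medium-a, zero-b condition.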

  2. Sediment laboratory quality-assurance project: studies of methods and materials

    USGS Publications Warehouse

    Gordon, J.D.; Newland, C.A.; Gray, J.R.

    2001-01-01

    In August 1996 the U.S. Geological Survey initiated the Sediment Laboratory Quality-Assurance project. The Sediment Laboratory Quality-Assurance project is part of the National Sediment Laboratory Quality-Assurance program. This paper addresses the findings of the sand/fine separation analysis completed for the single-blind reference sediment-sample project and differences in reported results between two different analytical procedures. From the results it is evident that an incomplete separation of fine- and sand-size material commonly occurs, resulting in the classification of some of the fine-size material as sand-size material. Electron microscopy analysis supported the hypothesis that the negative bias for fine-size material and the positive bias for sand-size material is largely due to aggregation of some of the fine-size material into sand-size particles and adherence of fine-size material to the sand-size grains. Electron microscopy analysis showed that preserved river water that was low in dissolved solids and specific conductance and had a neutral pH showed less aggregation and adhesion than preserved river water that was higher in dissolved solids and specific conductance with a basic pH. Bacteria were also found growing in the matrix, which may enhance fine-size material aggregation through their adhesive properties. Differences between sediment-analysis methods were also investigated as part of this study. Suspended-sediment concentration results obtained from one participating laboratory that used a total-suspended solids (TSS) method had greater variability and larger negative biases than results obtained when this laboratory used a suspended-sediment concentration method. When TSS methods were used to analyze the reference samples, the median suspended-sediment concentration percent difference was -18.04 percent. When the laboratory used a suspended-sediment concentration method, the median suspended-sediment concentration percent difference was -2.74 percent. The percent difference was calculated as follows: Percent difference = ((reported mass - known mass)/known mass) × 100.

  3. Characterisation of Fine Ash Fractions from the AD 1314 Kaharoa Eruption

    NASA Astrophysics Data System (ADS)

    Weaver, S. J.; Rust, A.; Carey, R. J.; Houghton, B. F.

    2012-12-01

    The AD 1314±12 yr Kaharoa eruption of Tarawera volcano, New Zealand, produced deposits exhibiting both plinian and subplinian characteristics (Nairn et al., 2001; 2004, Leonard et al., 2002, Hogg et al., 2003). Their widespread dispersal yielded volumes, column heights, and mass discharge rates of plinian magnitude and intensity (Sahetapy-Engel, 2002); however, vertical shifts in grain size suggest waxing and waning within single phases and time-breaks on the order of hours between phases. These grain size shifts were quantified using sieve, laser diffraction, and image analysis of the fine ash fractions (<1 mm in diameter) of some of the most explosive phases of the eruption. These analyses served two purposes: 1) to characterise the change in eruption intensity over time, and 2) to compare the three methods of grain size analysis. Additional analyses of the proportions of components and particle shape were also conducted to aid in the interpretation of the eruption and transport dynamics. 110 samples from a single location about 6 km from source were sieved at half-phi intervals from -4φ to 4φ (16 mm - 63 μm). A single sample was then chosen to test the range of grain sizes to run through the Mastersizer 2000. Three aliquots were tested; the first consisted of each sieve size fraction ranging between 0φ (1000 μm) and <4φ (<63 μm, i.e. the pan). For example, 0, 0.5, 1, …, 4φ, and the pan were run through the Mastersizer, and their results, weighted according to their sieve weight percents, were summed to produce a total distribution. The second aliquot included 3 samples ranging between 0-2φ (1000-250 μm), 2.5-4φ (249-63 μm), and the pan. A single sample consisting of the total range of grain sizes between 0φ and the pan was used for the final aliquot. Their results were compared and it was determined that the single sample consisting of the broadest range of grain sizes yielded an accurate grain size distribution. These data were then compared with the sieve weight percent data, revealing a significant difference in size characterisation between sieving and the Mastersizer for size fractions between 0-3φ (1000-125 μm). This is due predominantly to the differing methods that sieving and the Mastersizer use to characterise a single particle, to inhomogeneity in grain density in each grain-size fraction, and to grain-shape irregularities. This led the Mastersizer to allocate grains from a certain sieve size fraction into coarser size fractions. Therefore, only the Mastersizer data from 3.5φ and below were combined with the coarser sieve data to yield total grain size distributions. This high-resolution analysis of the grain size data enabled subtle trends in grain size to be identified and related to short timescale eruptive processes.
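
    The phi-to-metric bookkeeping used above is d(mm) = 2^(-phi), so -4φ corresponds to 16 mm and 4φ to 62.5 (~63) μm; a two-line helper makes the conversions explicit.

        def phi_to_um(phi):
            """Grain diameter in microns from the phi scale: d(mm) = 2**(-phi)."""
            return 2.0 ** (-phi) * 1000.0

        for phi in (-4.0, 0.0, 3.5, 4.0):
            print(f"{phi:5.1f} phi = {phi_to_um(phi):8.1f} um")
        # -4 phi -> 16000 um (16 mm); 4 phi -> 62.5 um (~63 um), matching the sieve range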

  4. Size-segregated aerosol in a hot-spot pollution urban area: Chemical composition and three-way source apportionment.

    PubMed

    Bernardoni, V; Elser, M; Valli, G; Valentini, S; Bigi, A; Fermo, P; Piazzalunga, A; Vecchi, R

    2017-12-01

    In this work, a comprehensive characterisation and source apportionment of size-segregated aerosol collected using a multistage cascade impactor was performed. The samples were collected during wintertime in Milan (Italy), which is located in the Po Valley, one of the main pollution hot-spot areas in Europe. For every sampling, size-segregated mass concentration, elemental and ionic composition, and levoglucosan concentration were determined. Size-segregated data were inverted using the program MICRON to identify and quantify modal contributions of all the measured components. The detailed chemical characterisation allowed the application of a three-way (3-D) receptor model (implemented using Multilinear Engine) for size-segregated source apportionment and chemical profile identification. It is noteworthy that - as far as we know - this is the first time that three-way source apportionment has been attempted using data on aerosol collected by traditional cascade impactors. Seven factors were identified: wood burning, industry, resuspended dust, regional aerosol, construction works, traffic 1, and traffic 2. Further insights into size-segregated factor profiles suggested that the traffic 1 factor can be associated with diesel vehicles and traffic 2 with gasoline vehicles. The regional aerosol factor proved to be the main contributor (nearly 50%) to the droplet mode (accumulation sub-mode with modal diameter in the range 0.5-1 μm), whereas the overall contribution from the two factors related to traffic was the most important one in the other size modes (34-41%). The results showed that applying a 3-D receptor model to size-segregated samples allows the identification of factors of local and regional origin, while receptor modelling on integrated PM fractions usually singles out factors characterised by primary (e.g. industry, traffic, soil dust) and secondary (e.g. ammonium sulphate and nitrate) origin. Furthermore, the results suggested that the information on size-segregated chemical composition in different size classes was exploited by the model to relate primary emissions to rapidly-formed secondary compounds. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Unfolding sphere size distributions with a density estimator based on Tikhonov regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weese, J.; Korat, E.; Maier, D.

    1997-12-01

    This report proposes a method for unfolding sphere size distributions, given a sample of profile radii, that combines the advantages of a density estimator with those of Tikhonov regularization methods. The following topics are discussed: the relation between the profile and the sphere size distributions; the method for unfolding sphere size distributions; results based on simulations; and a comparison with experimental data.
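
    A minimal sketch of such an unfolding, assuming a discretized Wicksell-type kernel (a sphere of radius R cut by a random plane yields a profile radius r with density r/(R*sqrt(R^2 - r^2)); the size-bias weighting for intersection probability is ignored here) and a second-difference Tikhonov smoother; the bin grid and the value of lam are illustrative, not the report's.

        import numpy as np

        edges = np.linspace(0.0, 1.0, 21)          # radius bins (arbitrary units)
        mid = 0.5 * (edges[:-1] + edges[1:])
        A = np.zeros((mid.size, mid.size))
        for j, R in enumerate(mid):                # column j: sphere radius R
            for i in range(mid.size):              # row i: profile-radius bin
                lo, hi = edges[i], min(edges[i + 1], R)
                if hi > lo:
                    # integral of r/(R sqrt(R^2 - r^2)) dr over [lo, hi]
                    A[i, j] = (np.sqrt(R**2 - lo**2) - np.sqrt(R**2 - hi**2)) / R

        def tikhonov_unfold(A, b, lam):
            """Minimize ||Ax - b||^2 + lam^2 ||Lx||^2, L = 2nd-difference smoother."""
            L = np.diff(np.eye(A.shape[1]), n=2, axis=0)
            return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

        x_true = np.exp(-0.5 * ((mid - 0.6) / 0.1) ** 2)   # true sphere-size histogram
        b = A @ x_true + np.random.default_rng(5).normal(0, 0.005, mid.size)
        print(np.round(tikhonov_unfold(A, b, lam=0.1), 2))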

  6. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    USGS Publications Warehouse

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (x̄ = 224) for radiotracking data and 16-130 km2 (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
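
    The MCP calculation itself is just the area of a convex hull; the sketch below, with synthetic fixes rather than the bears' GPS data, reproduces the qualitative finding that MCP area grows with the number of locations and stabilizes only at larger samples.

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(6)
        fixes = rng.normal(0.0, 5.0, (400, 2))     # synthetic GPS fixes (km)

        def mcp_area(points):
            return ConvexHull(points).volume       # in 2-D, .volume is the area

        for n in (15, 60, 120, 400):
            sub = fixes[rng.choice(len(fixes), n, replace=False)]
            print(f"{n:4d} locations -> MCP area {mcp_area(sub):6.1f} km^2")
        # area rises with n and only levels off for large samples, as reported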

  7. Accounting for treatment by center interaction in sample size determinations and the use of surrogate outcomes in the pessary for the prevention of preterm birth trial: a simulation study.

    PubMed

    Willan, Andrew R

    2016-07-05

    The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment by center interaction, and the issue of how this will affect the sample size requirements was raised. The sample size requirements to examine the effect of the pessary on the baby's clinical outcome was prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue a Bayesian decision model is proposed that combines the information regarding the between-treatment difference in the probability of preterm birth from PS3 with the data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study that relate preterm birth and perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment by center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. Under the same assumptions used for determining the sample size requirements, the simulated mean probability that pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with coefficient of variation and increased with the number of clinical sites. Employing simulation and sensitivity analysis is a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment by center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
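
    A simulation in the spirit described can be sketched briefly; the absolute risk reduction of 0.08 and the use of a simple two-proportion z-test are illustrative assumptions, while the Beta parameters are solved from the stated mean 0.3 and coefficient of variation 0.3.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(7)

        def power(n_per_arm, n_centers=20, mean_p=0.30, cv=0.30,
                  risk_reduction=0.08, alpha=0.05, reps=2000):
            """Simulated power of a two-proportion z-test when the Standard-Arm
            preterm-birth probability varies across centers as a Beta with the
            stated mean and coefficient of variation."""
            var = (cv * mean_p) ** 2
            s = mean_p * (1 - mean_p) / var - 1        # Beta concentration a + b
            a, b = mean_p * s, (1 - mean_p) * s
            m = n_per_arm // n_centers                 # patients per center per arm
            n = m * n_centers
            hits = 0
            for _ in range(reps):
                p_ctrl = rng.beta(a, b, n_centers)
                x0 = rng.binomial(m, p_ctrl).sum()
                x1 = rng.binomial(m, np.clip(p_ctrl - risk_reduction, 0, 1)).sum()
                p0, p1 = x0 / n, x1 / n
                se = np.sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)
                hits += abs(p0 - p1) / se > norm.ppf(1 - alpha / 2)
            return hits / reps

        print(power(540), power(700))   # power lost to the interaction, then recovered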

  8. Samples in applied psychology: over a decade of research in review.

    PubMed

    Shen, Winny; Kiger, Thomas B; Davies, Stacy E; Rasch, Rena L; Simon, Kara M; Ones, Deniz S

    2011-09-01

    This study examines sample characteristics of articles published in Journal of Applied Psychology (JAP) from 1995 to 2008. At the individual level, the overall median sample size over the period examined was approximately 173, which is generally adequate for detecting the average magnitude of effects of primary interest to researchers who publish in JAP. Samples using higher units of analyses (e.g., teams, departments/work units, and organizations) had lower median sample sizes (Mdn ≈ 65), yet were arguably robust given typical multilevel design choices of JAP authors despite the practical constraints of collecting data at higher units of analysis. A substantial proportion of studies used student samples (~40%); surprisingly, median sample sizes for student samples were smaller than working adult samples. Samples were more commonly occupationally homogeneous (~70%) than occupationally heterogeneous. U.S. and English-speaking participants made up the vast majority of samples, whereas Middle Eastern, African, and Latin American samples were largely unrepresented. On the basis of study results, recommendations are provided for authors, editors, and readers, which converge on 3 themes: (a) appropriateness and match between sample characteristics and research questions, (b) careful consideration of statistical power, and (c) the increased popularity of quantitative synthesis. Implications are discussed in terms of theory building, generalizability of research findings, and statistical power to detect effects. PsycINFO Database Record (c) 2011 APA, all rights reserved

  9. Effect of magnetic anisotropy and particle size distribution on temperature dependent magnetic hyperthermia in Fe3O4 ferrofluids

    NASA Astrophysics Data System (ADS)

    Palihawadana Arachchige, Maheshika; Nemala, Humeshkar; Naik, Vaman; Naik, Ratna

    Magnetic hyperthermia (MHT) has great potential as a non-invasive cancer therapy technique. The specific absorption rate (SAR), which measures the efficiency of heat generation, depends mainly on magnetic properties of the nanoparticles, such as saturation magnetization (Ms) and magnetic anisotropy (K), which in turn depend on particle size and shape. Therefore, MHT applications of magnetic nanoparticles often require controllable synthesis to achieve the desired magnetic properties. We have synthesized Fe3O4 nanoparticles using two different methods, co-precipitation (CP) and hydrothermal (HT) techniques, to produce a similar XRD crystallite size of 12 nm, and subsequently coated them with dextran to prepare ferrofluids for MHT. However, TEM measurements show average particle sizes of 13.8 +/-3.6 nm and 14.6 +/-3.6 nm for the HT and CP samples, respectively, implying the existence of an amorphous surface layer in both. The MHT data show the two samples have very different SAR values of 110 W/g (CP) and 40 W/g (HT) at room temperature, although they have similar Ms of 70 +/-4 emu/g regardless of their different TEM sizes. We fitted the temperature-dependent SAR using linear response theory to explain the observed results. The CP sample shows a larger magnetic core with a narrow size distribution and a higher K value than the HT sample.

  10. Effects of Sample Preparation on the Infrared Reflectance Spectra of Powders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.

    2015-05-22

    While reflectance spectroscopy is a useful tool in identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that account for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-packed as well as pressed into pellets using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.

  11. Effects of sample preparation on the infrared reflectance spectra of powders

    NASA Astrophysics Data System (ADS)

    Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.; Su, Yin-Fong; Blake, Thomas A.; Forland, Brenda M.

    2015-05-01

    While reflectance spectroscopy is a useful tool for identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that account for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-loaded as well as pressed into pellets using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample can have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.

  12. Effects of Al(OH)O nanoparticle agglomerate size in epoxy resin on tension, bending, and fracture properties

    NASA Astrophysics Data System (ADS)

    Jux, Maximilian; Finke, Benedikt; Mahrholz, Thorsten; Sinapius, Michael; Kwade, Arno; Schilde, Carsten

    2017-04-01

    Several Al(OH)O (boehmite) dispersions in an epoxy resin are produced in a kneader to study the mechanistic correlation between nanoparticle size and the mechanical properties of the prepared nanocomposites. The agglomerate size is set by a targeted variation in solid content and temperature during dispersion, resulting in a different level of stress intensity and thus a different final agglomerate size during the process. The suspension viscosity was used for the estimation of stress energy in laminar shear flow. Agglomerate size measurements are executed via dynamic light scattering to ensure the quality of the produced dispersions. Furthermore, various nanocomposite samples are prepared for three-point bending, tension, and fracture toughness tests. The screening of the size effect is executed with at least seven samples per agglomerate size and test method. The variation of solid content is found to be a reliable method to adjust the agglomerate size between 138 and 354 nm during dispersion. The size effect on the Young's modulus and the critical stress intensity is only marginal. Nevertheless, there is a statistically relevant trend showing a linear increase with a decrease in agglomerate size. In contrast, the size effect is more pronounced for the sample's strain and stress at failure. Unlike microscaled agglomerates or particles, which lead to embrittlement of the composite material, nanoscaled agglomerates or particles allow the composite elongation to remain nearly at the level of the base material. The observed effect is valid for agglomerate sizes between 138 and 354 nm and a particle mass fraction of 10 wt%.

  13. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    PubMed

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    Our objectives were to assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation, and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. Proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. Results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible, but that these designs also reduce the required sample size. © 2014 John Wiley & Sons Ltd.
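
    The unbiased estimators themselves are not given in the abstract, but the bias they correct is easy to exhibit: under curtailed sampling the naive proportion is a biased estimate of coverage. A sketch using one of the field-tested designs quoted above (n = 60, decision rule 33); the accept/reject semantics assumed here are illustrative.

        import numpy as np

        rng = np.random.default_rng(8)

        def curtailed_run(p, n=60, d=33):
            """Stop as soon as the LQAS decision is forced: accept once successes
            exceed d, reject once failures reach n - d."""
            succ = fail = 0
            while succ <= d and fail < n - d:
                if rng.random() < p:
                    succ += 1
                else:
                    fail += 1
            return succ, succ + fail

        true_p = 0.5
        est = [s / m for s, m in (curtailed_run(true_p) for _ in range(20000))]
        print(f"naive mean estimate {np.mean(est):.3f} vs true coverage {true_p}")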

  14. The Impacts of Family Size on Investment in Child Quality

    ERIC Educational Resources Information Center

    Caceres-Delpiano, Julio

    2006-01-01

    Using multiple births as an exogenous shift in family size, I investigate the impact of the number of children on child investment and child well-being. Using data from the 1980 US Census Five-Percent Public Use Micro Sample, 2SLS results demonstrate that parents facing a change in family size reallocate resources in a way consistent with Becker's…

  15. Connecting Clump Sizes in Turbulent Disk Galaxies to Instability Theory

    NASA Astrophysics Data System (ADS)

    Fisher, David B.; Glazebrook, Karl; Abraham, Roberto G.; Damjanov, Ivana; White, Heidi A.; Obreschkow, Danail; Basset, Robert; Bekiaris, Georgios; Wisnioski, Emily; Green, Andy; Bolatto, Alberto D.

    2017-04-01

    In this letter we study the mean sizes of Hα clumps in turbulent disk galaxies relative to kinematics, gas fractions, and Toomre Q. We use ~100 pc resolution HST images, IFU kinematics, and gas fractions of a sample of rare, nearby turbulent disks with properties closely matched to z ~ 1.5-2 main-sequence galaxies (the DYNAMO sample). We find linear correlations of normalized mean clump sizes with both the gas fraction and the velocity dispersion-to-rotation velocity ratio of the host galaxy. We show that these correlations are consistent with predictions derived from a model of instabilities in a self-gravitating disk (the so-called “violent disk instability model”). We also observe, using a two-fluid model for Q, a correlation between the size of clumps and self-gravity-driven unstable regions. These results are most consistent with the hypothesis that massive star-forming clumps in turbulent disks are the result of instabilities in self-gravitating gas-rich disks, and therefore provide a direct connection between resolved clump sizes and this in situ mechanism.
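
    For reference, the standard single-fluid Toomre parameters and a common two-fluid combination (the Wang & Silk 1994 approximation; an assumption here, since the letter does not state which two-fluid form it adopts):

        Q_{\mathrm{gas}} = \frac{\sigma_{g}\,\kappa}{\pi G \Sigma_{g}},
        \qquad
        Q_{\mathrm{star}} \simeq \frac{\sigma_{*}\,\kappa}{3.36\,G\,\Sigma_{*}},
        \qquad
        \frac{1}{Q_{\mathrm{2f}}} \approx \frac{1}{Q_{\mathrm{gas}}} + \frac{1}{Q_{\mathrm{star}}}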

  16. Alpha spectrometric characterization of process-related particle size distributions from active particle sampling at the Los Alamos National Laboratory uranium foundry

    NASA Astrophysics Data System (ADS)

    Plionis, A. A.; Peterson, D. S.; Tandon, L.; LaMont, S. P.

    2010-03-01

    Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine if a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.

  17. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that samples of no more than ten are not uncommon in biomedical research, and that many such studies are limited to strong effects because their sample sizes are smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of the studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
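
    A minimal sketch (not the authors' code) of this kind of computer-simulated experiment: estimating the Type I and Type II error rates of the two-sample t-test at small sample sizes. The normal population model, effect size, and trial count are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)

    def error_rates(n, effect=1.0, trials=10_000, alpha=0.05):
        """Estimate Type I/II error rates of the t-test for sample size n."""
        type1 = type2 = 0
        for _ in range(trials):
            control = rng.normal(0.0, 1.0, n)
            null_exposed = rng.normal(0.0, 1.0, n)      # no true effect
            true_exposed = rng.normal(effect, 1.0, n)   # true (assumed) effect
            if ttest_ind(control, null_exposed).pvalue < alpha:
                type1 += 1  # false positive
            if ttest_ind(control, true_exposed).pvalue >= alpha:
                type2 += 1  # missed effect
        return type1 / trials, type2 / trials

    for n in (3, 6, 9):
        t1, t2 = error_rates(n)
        print(f"n={n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
    ```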

  18. “Nanofiltration” Enabled by Super-Absorbent Polymer Beads for Concentrating Microorganisms in Water Samples

    NASA Astrophysics Data System (ADS)

    Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R.

    2016-02-01

    Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation.

  19. "Nanofiltration" Enabled by Super-Absorbent Polymer Beads for Concentrating Microorganisms in Water Samples.

    PubMed

    Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R

    2016-02-15

    Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation.

  20. “Nanofiltration” Enabled by Super-Absorbent Polymer Beads for Concentrating Microorganisms in Water Samples

    PubMed Central

    Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R.

    2016-01-01

    Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation. PMID:26876979

  1. Surface degassing and modifications to vesicle size distributions in active basalt flows

    USGS Publications Warehouse

    Cashman, K.V.; Mangan, M.T.; Newman, S.

    1994-01-01

    The character of the vesicle population in lava flows includes several measurable parameters that may provide important constraints on lava flow dynamics and rheology. Interpretation of vesicle size distributions (VSDs), however, requires an understanding of vesiculation processes in feeder conduits, and of post-eruption modifications to VSDs during transport and emplacement. To this end we collected samples from active basalt flows at Kilauea Volcano: (1) near the effusive Kupaianaha vent; (2) through skylights in the approximately isothermal Wahaula and Kamoamoa tube systems transporting lava to the coast; (3) from surface breakouts at different locations along the lava tubes; and (4) from different locations in a single breakout from a lava tube 1 km from the episode 51 vent at Pu'u 'O'o. Near-vent samples are characterized by VSDs that show exponentially decreasing numbers of vesicles with increasing vesicle size. These size distributions suggest that nucleation and growth of bubbles were continuous during ascent in the conduit, with minor associated bubble coalescence resulting from differential bubble rise. The entire vesicle population can be attributed to shallow exsolution of H2O-dominated gases at rates consistent with those predicted by simple diffusion models. Measurements of H2O, CO2 and S in the matrix glass show that the melt equilibrated rapidly at atmospheric pressure. Down-tube samples maintain similar VSD forms but show a progressive decrease in both overall vesicularity and mean vesicle size. We attribute this change to open-system, "passive" rise and escape of larger bubbles to the surface. Such gas loss from the tube system results in an output of 1.2 × 10^6 g/day of SO2, an addition of approximately 1% to overall volatile budget calculations. A steady increase in bubble number density with downstream distance is best explained by continued bubble nucleation at rates of 7-8 cm^-3 s^-1. These rates are ~25% of those estimated from the vent samples, and thus represent volatile supersaturations considerably less than those of the conduit. We note also that the small total volume represented by this new bubble population does not (1) measurably deplete the melt in volatiles, or (2) make up for the overall vesicularity decrease resulting from the loss of larger bubbles. Surface breakout samples have distinctive VSDs characterized by an extreme depletion in the small vesicle population. This results in samples with much lower number densities and larger mean vesicle sizes than corresponding tube samples. Similar VSD patterns have been observed in solidified lava flows and are interpreted to result from either static (wall rupture) or dynamic (bubble rise and capture) coalescence. Through comparison with vent and tube vesicle populations, we suggest that, in addition to coalescence, the observed vesicle populations in the breakout samples have experienced a rapid loss of small vesicles consistent with 'ripening' of the VSD resulting from interbubble diffusion of volatiles. Confinement of ripening features to surface flows suggests that the thin skin that forms on surface breakouts may play a role in the observed VSD modification. © 1994.

  2. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    PubMed

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
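
    The link between low power and low PPV noted above follows from Bayes' rule. A hedged sketch: the prior probability of a true effect below is an assumed value for illustration, not a number from the paper.

    ```python
    def ppv(power, alpha=0.05, prior=0.2):
        """PPV = P(true effect | significant result), from Bayes' rule."""
        return power * prior / (power * prior + alpha * (1 - prior))

    # Low sensitivity drags PPV down even at a fixed alpha
    for power in (0.02, 0.2, 0.8):
        print(f"power={power:.2f} -> PPV={ppv(power):.2f}")
    ```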

  3. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    PubMed

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
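
    As a hedged illustration of the non-parametric approach highlighted above: the AUC equals the normalized Mann-Whitney statistic, and a percentile bootstrap gives one simple (not necessarily best-performing) small-sample CI. The data come from an assumed binormal model, not from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def auc_mw(pos, neg):
        """AUC as the normalized Mann-Whitney statistic:
        P(X_pos > X_neg) + 0.5 * P(tie)."""
        diff = pos[:, None] - neg[None, :]
        return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

    def bootstrap_ci(pos, neg, n_boot=2000, level=0.95):
        """Percentile bootstrap CI for the AUC (resampling within groups)."""
        stats = [auc_mw(rng.choice(pos, pos.size), rng.choice(neg, neg.size))
                 for _ in range(n_boot)]
        return tuple(np.percentile(stats, [(1 - level) / 2 * 100,
                                           (1 + level) / 2 * 100]))

    # Hypothetical small-sample marker values under a binormal model
    pos = rng.normal(1.0, 1.0, 15)   # diseased
    neg = rng.normal(0.0, 1.0, 15)   # healthy
    print(auc_mw(pos, neg), bootstrap_ci(pos, neg))
    ```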

  4. Galaxy evolution by color-log(n) type since redshift unity in the Hubble Ultra Deep Field

    NASA Astrophysics Data System (ADS)

    Cameron, E.; Driver, S. P.

    2009-01-01

    Aims: We explore the use of the color-log(n) (where n is the global Sérsic index) plane as a tool for subdividing the galaxy population in a physically-motivated manner out to redshift unity. We thereby aim to quantify surface brightness evolution by color-log(n) type, accounting separately for the specific selection and measurement biases against each. Methods: We construct (u-r) color-log(n) diagrams for distant galaxies in the Hubble Ultra Deep Field (UDF) within a series of volume-limited samples to z=1.5. The color-log(n) distributions of these high redshift galaxies are compared against that measured for nearby galaxies in the Millennium Galaxy Catalogue (MGC), as well as to the results of visual morphological classification. Based on this analysis we divide our sample into three color-structure classes, namely “red, compact”, “blue, diffuse” and “blue, compact”. Luminosity-size diagrams are constructed for members of the two largest classes (“red, compact” and “blue, diffuse”), both in the UDF and the MGC. Artificial galaxy simulations (for systems with exponential and de Vaucouleurs profile shapes alternately) are used to identify “bias-free” regions of the luminosity-size plane in which galaxies are detected with high completeness, and their fluxes and sizes recovered with minimal surface brightness-dependent biases. Galaxy evolution is quantified via comparison of the low and high redshift luminosity-size relations within these “bias-free” regions. Results: We confirm the correlation between color-log(n) plane position and visual morphological type observed locally and in other high redshift studies in the color and/or structure domain. The combined effects of observational uncertainties, the morphological K-correction and cosmic variance preclude a robust statistical comparison of the shape of the MGC and UDF color-log(n) distributions. However, in the interval 0.75 < z < 1.0, where the UDF i-band samples close to rest-frame B-band light (i.e., the morphological K-correction between our samples is negligible), we are able to present tentative evidence of bimodality, albeit for a very small sample size (17 galaxies). Our unique approach to quantifying selection and measurement biases in the luminosity-size plane highlights the need to consider errors in the recovery of both magnitudes and sizes, and their dependence on profile shape. Motivated by these results we divide our sample into the three color-structure classes mentioned above and quantify luminosity-size evolution by galaxy type. Specifically, we detect decreases in B-band surface brightness of 1.57 ± 0.22 mag arcsec-2 and 1.65 ± 0.22 mag arcsec-2 for our “blue, diffuse” and “red, compact” classes respectively between redshift unity and the present day.

  5. Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.

    PubMed

    Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon

    2016-07-01

    Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus Mack-Skillings tests. Panelists (n = 125) ranked 2 orange juice sets twice: a different-samples set (100%, 70%, vs. 40% juice) and a similar-samples set (100%, 95%, vs. 90%). These 2 sample sets were designed to yield contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) joined rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were analyzed separately by the Friedman test, while those from (3) were analyzed by the Mack-Skillings test. The effect of sample size (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results for the main effects were more inconsistent among methods and sample sizes. Regardless of the analysis method, the larger the sample size, the higher the χ2 value and the lower the P-value (testing H0: all samples are not different). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes, hence this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
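
    A hedged sketch of method (4) above, pooling rank data from both replications and applying SciPy's Friedman test. The panelist scores are simulated stand-ins (the preference means and noise level are invented, not the study's data), and the Mack-Skillings test has no SciPy implementation, so only the Friedman analysis is shown.

    ```python
    import numpy as np
    from scipy.stats import friedmanchisquare

    rng = np.random.default_rng(2)
    n_panelists, n_reps = 125, 2
    # Assumed latent preference scores for the 100%, 70%, 40% juice samples
    means = np.array([3.0, 2.0, 1.0])
    scores = means + rng.normal(0, 1.5, (n_panelists * n_reps, 3))
    # Convert each panelist-replication row to within-row ranks 1..3
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1

    # Pooled replications (method 4): 250 blocks of 3 ranked samples
    stat, p = friedmanchisquare(ranks[:, 0], ranks[:, 1], ranks[:, 2])
    print(f"Friedman chi2 = {stat:.2f}, p = {p:.2g}")
    ```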

  6. The role of beaded activated carbon's surface oxygen groups on irreversible adsorption of organic vapors.

    PubMed

    Jahandar Lashaki, Masoud; Atkinson, John D; Hashisho, Zaher; Phillips, John H; Anderson, James E; Nichols, Mark

    2016-11-05

    The objective of this study is to determine the contribution of surface oxygen groups to irreversible adsorption (aka heel formation) during cyclic adsorption/regeneration of organic vapors commonly found in industrial systems, including vehicle-painting operations. For this purpose, three chemically modified activated carbon samples were prepared: two oxygen-deficient (hydrogen-treated and heat-treated) and one oxygen-rich (nitric acid-treated). The samples were tested for 5 adsorption/regeneration cycles using a mixture of nine organic compounds. The mass-balance cumulative heel was 14% and 20% higher for the oxygen-functionalized and hydrogen-treated samples, respectively, relative to the heat-treated sample. Thermal analysis results showed heel formation due to physisorption for the oxygen-deficient samples, and weakened physisorption combined with chemisorption for the oxygen-rich sample. Chemisorption was attributed to consumption of surface oxygen groups by adsorbed species, resulting in the formation of high boiling point oxidation byproducts or bonding between the adsorbates and the surface groups. Pore size distributions indicated that different pore sizes contributed to heel formation: narrow micropores (<7 Å) in the oxygen-deficient samples and midsize micropores (7-12 Å) in the oxygen-rich sample. The results from this study help explain the heel formation mechanism and how it relates to chemically tailored adsorbent materials. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Repopulation of calibrations with samples from the target site: effect of the size of the calibration.

    NASA Astrophysics Data System (ADS)

    Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.

    2009-04-01

    Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, the pre-treatments needed for samples are minimal, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. A necessary step for this is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target-site soils in which the calibration is to be used. This premise is often not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is to repopulate the library and subsequently recalibrate the models. In this work we studied the changes in the accuracy of the predictions as a consequence of the successive addition of samples during repopulation. In general, calibrations with a high number of samples and high diversity are desired, but we hypothesized that calibrations with fewer samples (smaller size) would more easily absorb the spectral characteristics of the target site. Thus, we suspected that the size of the calibration (model) to be repopulated could be important, and we also studied its effect on the accuracy of predictions of the repopulated models. In this study we used those spectra of our library with data on soil Kjeldahl nitrogen (NKj) content (nearly 1500 samples). First, the spectra from the target site were removed from the spectral library. Then, different quantities of samples from the library were selected (representing 5, 10, 25, 50, 75 and 100% of the total library) and used to develop calibrations of different sizes. We used partial least squares regression and leave-one-out cross-validation as calibration methods. Two methods were used to select the different quantities (model sizes) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods tried to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential: in each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, for a total of 20 added samples. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content in those target-site samples not included in the repopulations. To measure the accuracy of the predictions, the r2, RMSEP and slopes were calculated by comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, few differences were found between the results obtained with BCS and BVS models. We observed that repopulation of the models increased the r2 of the predictions in sites 1 and 3. Repopulation caused scarce changes in the r2 of the predictions in sites 2 and 4, perhaps because of the high initial values (r2 > 0.90 using non-repopulated models). As a consequence of repopulation, the RMSEP decreased in all sites except site 2, where a very low RMSEP was obtained before repopulation (0.4 g kg-1). The slopes tended to approach 1, but this value was reached only in site 4 and only after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models. Predictions obtained with models of similar size (similar %) were averaged in order to describe the main patterns. The r2 of predictions obtained with larger models was not better than that obtained with smaller models. After repopulation, the RMSEP of predictions using models of smaller size (5, 10 and 25% of the samples of the library) was lower than the RMSEP obtained with larger sizes (75 and 100%), indicating that small models can more easily integrate the variability of the soils from the target site. The results suggest that calibrations of small size could be repopulated and thereby "converted" into local calibrations. Accordingly, most of the effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here stand in opposition to the idea of global models. These results could encourage the expansion of this technique, because very large databases seem not to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. The authors acknowledge "Bancaja-UMH" for financial support of the project "NIRPROS".
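
    The repopulation loop above lends itself to a short schematic. The sketch below uses synthetic stand-in spectra and scikit-learn's PLS regression (consistent with the abstract's partial least squares approach), adding target-site samples two at a time, refitting, and tracking RMSEP on held-out target-site samples; the study's leave-one-out recalibration is replaced by a simple held-out split for brevity.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    # Invented stand-ins for library and target-site spectra (50 wavelengths)
    X_lib, y_lib = rng.normal(size=(75, 50)), rng.normal(size=75)
    X_site, y_site = rng.normal(size=(30, 50)), rng.normal(size=30)

    X_test, y_test = X_site[20:], y_site[20:]   # target-site samples never fitted
    for k in range(0, 21, 2):                   # repopulate 2 samples at a time
        X = np.vstack([X_lib, X_site[:k]])
        y = np.concatenate([y_lib, y_site[:k]])
        model = PLSRegression(n_components=5).fit(X, y)
        rmsep = float(np.sqrt(np.mean((model.predict(X_test).ravel() - y_test) ** 2)))
        print(f"repopulated with {k:2d} site samples: RMSEP = {rmsep:.3f}")
    ```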

  8. A microfluidic platform for precision small-volume sample processing and its use to size separate biological particles with an acoustic microdevice [Precision size separation of biological particles in small-volume samples by an acoustic microfluidic system

    DOE PAGES

    Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...

    2015-11-23

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include a modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  9. New Measurements of the Particle Size Distribution of Apollo 11 Lunar Soil 10084

    NASA Technical Reports Server (NTRS)

    McKay, D.S.; Cooper, B.L.; Riofrio, L.M.

    2009-01-01

    We have initiated a major new program to determine the grain size distribution of nearly all lunar soils collected in the Apollo program. Following the return of Apollo soil and core samples, a number of investigators, including our own group, performed grain size distribution studies and published the results [1-11]. Nearly all of these studies were done by sieving the samples, usually with a working fluid such as Freon™ or water. We have measured the particle size distribution of lunar soil 10084,2005 in water, using a Microtrac™ laser diffraction instrument. Details of our own sieving technique and protocol (also used in [11]) are given in [4]. While sieving usually produces accurate and reproducible results, it has disadvantages. It is very labor intensive and requires hours to days to perform properly. Even using automated sieve shaking devices, four or five days may be needed to sieve each sample, although multiple sieve stacks increase productivity. Second, sieving is subject to loss of grains through handling and weighing operations, and these losses are concentrated in the finest grain sizes. Loss from handling becomes a more acute problem when smaller amounts of material are used. While we were able to quantitatively sieve into 6 or 8 size fractions using starting soil masses as low as 50 mg, attrition and handling problems limit the practicality of sieving smaller amounts. Third, sieving below 10 or 20 microns is not practical because of the problems of grain loss and of smaller grains sticking to coarser grains. Sieving is completely impractical below about 5-10 microns. Consequently, sieving gives no information on the size distribution below approximately 10 microns, which includes the important submicrometer and nanoparticle size ranges. Finally, sieving creates a limited number of size bins and may therefore miss fine structure of the distribution that would be revealed by other methods producing many smaller size bins.

  10. Study design requirements for RNA sequencing-based breast cancer diagnostics.

    PubMed

    Mer, Arvind Singh; Klevebring, Daniel; Grönberg, Henrik; Rantalainen, Mattias

    2016-02-01

    Sequencing-based molecular characterization of tumors provides information required for individualized cancer treatment. There are well-defined molecular subtypes of breast cancer that provide improved prognostication compared to routine biomarkers. However, molecular subtyping is not yet implemented in routine breast cancer care. Clinical translation is dependent on subtype prediction models providing high sensitivity and specificity. In this study we evaluate sample size and RNA-sequencing read requirements for breast cancer subtyping to facilitate rational design of translational studies. We applied subsampling to ascertain the effect of training sample size and the number of RNA sequencing reads on classification accuracy of molecular subtype and routine biomarker prediction models (unsupervised and supervised). Subtype classification accuracy improved with increasing sample size up to N = 750 (accuracy = 0.93), although with a modest improvement beyond N = 350 (accuracy = 0.92). Prediction of routine biomarkers achieved accuracy of 0.94 (ER) and 0.92 (Her2) at N = 200. Subtype classification improved with RNA-sequencing library size up to 5 million reads. Development of molecular subtyping models for cancer diagnostics requires well-designed studies. Sample size and the number of RNA sequencing reads directly influence accuracy of molecular subtyping. Results in this study provide key information for rational design of translational studies aiming to bring sequencing-based diagnostics to the clinic.
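
    A minimal sketch of the subsampling idea described above, under invented data: a logistic-regression "subtype" classifier is trained on increasingly large subsets and scored on a fixed test set, tracing a learning curve across the sample sizes the abstract reports. The classifier, feature count, and signal model are assumptions, not the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n, p = 1000, 200                       # samples x expression features
    X = rng.normal(size=(n, p))
    beta = rng.normal(size=p) * (rng.random(p) < 0.05)   # sparse signal
    y = (X @ beta + rng.normal(size=n) > 0).astype(int)  # two "subtypes"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=250, random_state=0)
    for n_train in (100, 200, 350, 750):   # subsample the training pool
        idx = rng.choice(len(y_tr), n_train, replace=False)
        clf = LogisticRegression(max_iter=2000).fit(X_tr[idx], y_tr[idx])
        print(n_train, round(clf.score(X_te, y_te), 3))
    ```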

  11. Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses

    DOE PAGES

    Daurer, Benedikt J.; Okamoto, Kenta; Bielecki, Johan; ...

    2017-04-07

    This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ~40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ~35 to ~300 nm in diameter). This is likely to be owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles is critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10^12 photons per µm^2 per pulse. The full-width of the focus at half-maximum was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. Finally, the results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers.

  12. Body mass estimates of hominin fossils and the evolution of human body size.

    PubMed

    Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G

    2015-08-01

    Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. QA/QC requirements for physical properties sampling and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Innis, B.E.

    1993-07-21

    This report presents results of an assessment of the available information concerning US Environmental Protection Agency (EPA) quality assurance/quality control (QA/QC) requirements and guidance applicable to sampling, handling, and analyzing physical parameter samples at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) investigation sites. Geotechnical testing laboratories measure the following physical properties of soil and sediment samples collected during CERCLA remedial investigations (RI) at the Hanford Site: moisture content, grain size by sieve, grain size by hydrometer, specific gravity, bulk density/porosity, saturated hydraulic conductivity, moisture retention, unsaturated hydraulic conductivity, and permeability of rocks by flowing air. Geotechnical testing laboratories also measure the following chemical parameters of soil and sediment samples collected during Hanford Site CERCLA RI: calcium carbonate and saturated column leach testing. Physical parameter data are used for (1) characterization of vadose and saturated zone geology and hydrogeology, (2) selection of monitoring well screen sizes, (3) support of modeling and analysis of the vadose and saturated zones, and (4) engineering design. The objectives of this report are to determine the QA/QC levels accepted in EPA Region 10 for the sampling, handling, and analysis of soil samples for physical parameters during CERCLA RI.

  14. Using the Sampling Margin of Error to Assess the Interpretative Validity of Student Evaluations of Teaching

    ERIC Educational Resources Information Center

    James, David E.; Schraw, Gregory; Kuch, Fred

    2015-01-01

    We present an equation, derived from standard statistical theory, that can be used to estimate sampling margin of error for student evaluations of teaching (SETs). We use the equation to examine the effect of sample size, response rates and sample variability on the estimated sampling margin of error, and present results in four tables that allow…
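
    The article's exact equation is not reproduced in the truncated abstract above; for orientation, the sketch below computes one standard form of a sampling margin of error for a class-mean rating, with a finite-population correction so that the response rate matters. All names and values are illustrative assumptions.

    ```python
    import math

    def margin_of_error(n_respondents, class_size, sd, z=1.96):
        """Half-width of the CI for a class mean rating, with a
        finite-population correction (FPC) for partial response."""
        fpc = math.sqrt((class_size - n_respondents) / (class_size - 1))
        return z * sd / math.sqrt(n_respondents) * fpc

    # e.g., 20 of 35 students respond; ratings have SD ~ 0.9 on a 5-point scale
    print(round(margin_of_error(20, 35, 0.9), 3))
    ```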

  15. Instrumental neutron activation analysis for studying size-fractionated aerosols

    NASA Astrophysics Data System (ADS)

    Salma, Imre; Zemplén-Papp, Éva

    1999-10-01

    Instrumental neutron activation analysis (INAA) was utilized for studying aerosol samples collected into a coarse and a fine size fraction on Nuclepore polycarbonate membrane filters. As a result of the panoramic INAA, 49 elements were determined in an amount of about 200-400 μg of particulate matter by two irradiations and four γ-spectrometric measurements. The analytical calculations were performed by the absolute (k0) standardization method. The calibration procedures, application protocol and data evaluation process are described and discussed. They now make it possible to analyse a considerable number of samples while assuring the quality of the results. As a means of demonstrating the system's analytical capabilities, the concentration ranges, median or mean atmospheric concentrations and detection limits are presented for an extensive series of aerosol samples collected within the framework of an urban air pollution study in Budapest. For most elements, the precision of the analysis was found to be better than the uncertainty represented by the sampling techniques and sample variability.

  16. Standardized mean differences cause funnel plot distortion in publication bias assessments.

    PubMed

    Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E

    2017-09-08

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
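
    As a rough illustration of the mechanism described above, the sketch below simulates many small two-arm studies with a true effect and computes each study's standardized mean difference together with the conventional SE formula. Because that formula contains d itself, sampling error in d leaks into the SE axis and tilts the funnel even without any publication bias. The study size, effect, and simulation are illustrative assumptions, not the authors' code or data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    true_effect, n = 1.0, 5          # many small studies, true effect present
    d, se = [], []
    for _ in range(500):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_effect, 1.0, n)
        sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)  # pooled SD
        di = (b.mean() - a.mean()) / sp                     # Cohen's d (SMD)
        d.append(di)
        # Conventional SE of the SMD: sqrt(2/n + d^2 / (4n)) for equal arms
        se.append(np.sqrt(2 / n + di**2 / (4 * n)))
    print(np.corrcoef(d, se)[0, 1])  # strong positive coupling -> asymmetry
    ```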

  17. Standardized mean differences cause funnel plot distortion in publication bias assessments

    PubMed Central

    Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris AH; Chamuleau, Steven AJ; MacLeod, Malcolm R

    2017-01-01

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results. PMID:28884685

  18. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment of mean cluster size alone or simultaneous adjustment of mean cluster size and number of clusters, and is a flexible alternative to and a useful complement of existing methods. Comparison indicated that the relative efficiency we define is greater than the relative efficiency reported in the literature under some conditions, and may be less under other conditions, in which case the relative efficiency is underestimated. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
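
    For orientation only: the sketch below computes the common textbook design effect for unequal cluster sizes, based on the coefficient of variation of cluster size, and contrasts the equal and unequal cases. The article's noncentrality-parameter definition of relative efficiency is a refinement of this idea and is not reproduced here; the cluster sizes and ICC are invented.

    ```python
    import numpy as np

    def design_effect(cluster_sizes, icc):
        """Standard variance inflation for clustering with unequal sizes:
        DE = 1 + ((CV^2 + 1) * mbar - 1) * ICC."""
        m = np.asarray(cluster_sizes, dtype=float)
        mbar, cv2 = m.mean(), m.var() / m.mean() ** 2
        return 1 + ((cv2 + 1) * mbar - 1) * icc

    equal = design_effect([20] * 10, icc=0.05)
    unequal = design_effect([5, 10, 15, 20, 20, 20, 25, 30, 35, 40], icc=0.05)
    print(equal, unequal, unequal / equal)  # efficiency lost to size variation
    ```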

  19. 40 CFR 86.1845-04 - Manufacturer in-use verification testing requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of test vehicles in the sample comply with the sample size requirements of this section. Any post... HDV must test, or cause to have tested, a specified number of vehicles. Such testing must be conducted... first test will be considered the official results for the test vehicle, regardless of any test results...

  20. ZnFe2O4 nanoparticles dispersed in a highly porous silica aerogel matrix: a magnetic study.

    PubMed

    Bullita, S; Casu, A; Casula, M F; Concas, G; Congiu, F; Corrias, A; Falqui, A; Loche, D; Marras, C

    2014-03-14

    We report the detailed structural characterization and magnetic investigation of nanocrystalline zinc ferrite nanoparticles supported on a silica aerogel porous matrix, which differ in size (in the range 4-11 nm) and in inversion degree (from 0.4 to 0.2) as compared to bulk zinc ferrite, which has a normal spinel structure. The samples were investigated by zero-field-cooling/field-cooling and thermoremanent DC magnetization measurements, AC magnetization investigation and Mössbauer spectroscopy. The nanocomposites are superparamagnetic at room temperature; the temperature of the superparamagnetic transition in the samples decreases with the particle size and is therefore mainly determined by the inversion degree rather than by the particle size, which would have an opposite effect on the blocking temperature. The contribution of particle interaction to the magnetic behavior of the nanocomposites decreases significantly in the sample with the largest particle size. The values of the anisotropy constant give evidence that the anisotropy constant decreases upon increasing particle size. All these results clearly indicate that, even when dispersed at low concentration in a non-magnetic, highly porous and insulating matrix, the zinc ferrite nanoparticles show a magnetic behavior similar to that displayed when they are unsupported or dispersed in a similar but denser matrix with higher loading. The effective anisotropy measured for our samples appears to be systematically higher than that measured for supported zinc ferrite nanoparticles of similar size, indicating that this effect probably occurs as a consequence of the high inversion degree.

  1. A size-dependent constitutive model of bulk metallic glasses in the supercooled liquid region

    PubMed Central

    Yao, Di; Deng, Lei; Zhang, Mao; Wang, Xinyun; Tang, Na; Li, Jianjun

    2015-01-01

    Size effect is of great importance in micro forming processes. In this paper, micro cylinder compression was conducted to investigate the deformation behavior of bulk metallic glasses (BMGs) in supercooled liquid region with different deformation variables including sample size, temperature and strain rate. It was found that the elastic and plastic behaviors of BMGs have a strong dependence on the sample size. The free volume and defect concentration were introduced to explain the size effect. In order to demonstrate the influence of deformation variables on steady stress, elastic modulus and overshoot phenomenon, four size-dependent factors were proposed to construct a size-dependent constitutive model based on the Maxwell-pulse type model previously presented by the authors according to viscosity theory and free volume model. The proposed constitutive model was then adopted in finite element method simulations, and validated by comparing the micro cylinder compression and micro double cup extrusion experimental data with the numerical results. Furthermore, the model provides a new approach to understanding the size-dependent plastic deformation behavior of BMGs. PMID:25626690

  2. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
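
    A back-of-the-envelope sketch of the step-size reasoning above, under placeholder values for system resolution and rotation radius: compute Nyquist-derived axial and angular steps, then apply the reported empirical adjustment (axial step halved, angular step roughly doubled). This is orientation only, not the authors' calculation.

    ```python
    import math

    resolution_mm = 1.5                  # assumed SPECT spatial resolution
    nyquist_step_mm = resolution_mm / 2  # Nyquist sampling interval
    radius_mm = 30.0                     # assumed rotation radius of the FOV
    # Arc length per step divided by radius gives the angle in radians
    nyquist_angle_deg = math.degrees(nyquist_step_mm / radius_mm)

    axial_step = nyquist_step_mm / 2     # reported optimum: half of Nyquist
    angular_step = 2 * nyquist_angle_deg # reported optimum: about twice Nyquist
    print(f"axial {axial_step:.2f} mm, angular {angular_step:.2f} deg")
    ```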

  3. Laboratory evaluation of the Sequoia Scientific LISST-ABS acoustic backscatter sediment sensor

    USGS Publications Warehouse

    Snazelle, Teri T.

    2017-12-18

    Sequoia Scientific’s LISST-ABS is an acoustic backscatter sensor designed to measure suspended-sediment concentration at a point source. Three LISST-ABS were evaluated at the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF). Serial numbers 6010, 6039, and 6058 were assessed for accuracy in solutions with varying particle-size distributions and for the effect of temperature on sensor accuracy. Certified sediment samples composed of different ranges of particle size were purchased from Powder Technology Inc. These sediment samples were 30–80-micron (µm) Arizona Test Dust; less than 22-µm ISO 12103-1, A1 Ultrafine Test Dust; and 149-µm MIL-STD 810E Silica Dust. The sensor was able to accurately measure suspended-sediment concentration when calibrated with sediment of the same particle-size distribution as the measured. Overall testing demonstrated that sensors calibrated with finer sized sediments overdetect sediment concentrations with coarser sized sediments, and sensors calibrated with coarser sized sediments do not detect increases in sediment concentrations from small and fine sediments. These test results are not unexpected for an acoustic-backscatter device and stress the need for using accurate site-specific particle-size distributions during sensor calibration. When calibrated for ultrafine dust with a less than 22-µm particle size (silt) and with the Arizona Test Dust with a 30–80-µm range, the data from sensor 6039 were biased high when fractions of the coarser (149-µm) Silica Dust were added. Data from sensor 6058 showed similar results with an elevated response to coarser material when calibrated with a finer particle-size distribution and a lack of detection when subjected to finer particle-size sediment. Sensor 6010 was also tested for the effect of dissimilar particle size during the calibration and showed little effect. Subsequent testing revealed problems with this sensor, including an inadequate temperature compensation, making this data questionable. The sensor was replaced by Sequoia Scientific with serial number 6039. Results from the extended temperature testing showed proper temperature compensation for sensor 6039, and results from the dissimilar calibration/testing particle-size distribution closely corroborated the results from sensor 6058.

  4. X-ray studies of aluminum alloy of the Al-Mg-Si system subjected to SPD processing

    NASA Astrophysics Data System (ADS)

    Sitdikov, V. D.; Murashkin, M. Yu; Khasanov, M. R.; Kasatkin, I. A.; Chizhov, P. S.; Bobruk, E. V.

    2014-08-01

    Recently it has been established that, during high-pressure torsion, dynamic aging takes place in Al-Mg-Si aluminum alloys, resulting in the formation of nanosized particles of strengthening phases in the aluminum matrix, which greatly improves the electrical conductivity and strength properties. In the present paper, structural characterization of ultrafine-grained (UFG) samples of aluminum 6201 alloy produced by severe plastic deformation (SPD) was performed using X-ray diffraction analysis. As a result, structural features (lattice parameter, size of coherent scattering domains) after dynamic aging of the UFG samples were determined. The size and distribution of second-phase particles in the Al matrix were assessed with regard to HPT regimes. The impact of the size and distribution of the formed secondary phases on the strength, ductility and electrical conductivity is discussed.

  5. Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders

    USGS Publications Warehouse

    Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael

    2015-01-01

    Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.

  6. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  7. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    PubMed Central

    2012-01-01

    Background Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), the standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes. PMID:23057445
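
    To make the bootstrapping idea concrete, the following Python sketch resamples clusters with replacement and shrinks the number of children kept per cluster, showing how the spread of the VC estimate grows as the design moves from 10 × 15 toward 10 × 3. The coverage value and cluster data are hypothetical placeholders, not the Mali survey data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 10 x 15 survey: 10 clusters of 15 children each,
# 1 = vaccinated, 0 = not vaccinated (placeholder data).
clusters = rng.binomial(1, 0.85, size=(10, 15))

def bootstrap_se_of_vc(clusters, children_per_cluster, n_boot=2000):
    """Resample clusters with replacement, keep only the first
    `children_per_cluster` children in each, and return the
    bootstrap SE of the vaccination coverage (VC) estimate."""
    n_clusters = clusters.shape[0]
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_clusters, size=n_clusters)
        estimates[b] = clusters[idx, :children_per_cluster].mean()
    return estimates.std()

for m in (15, 10, 5, 3):
    print(f"10 x {m:2d} design: bootstrap SE of VC ~ {bootstrap_se_of_vc(clusters, m):.3f}")
```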

  8. A Fracture Mechanics Approach to Thermal Shock Investigation in Alumina-Based Refractory

    NASA Astrophysics Data System (ADS)

    Volkov-Husović, T.; Heinemann, R. Jančić; Mitraković, D.

    2008-02-01

    The thermal shock behavior of large grain size, alumina-based refractories was investigated experimentally using a standard water quench test. A mathematical model was employed to simulate the thermal stability behavior. Behavior of the samples under repeated thermal shock was monitored using ultrasonic measurements of dynamic Young's modulus. Image analysis was used to observe the extent of surface degradation. Analysis of the obtained results for the behavior of large grain size samples under conditions of rapid temperature changes is given.

  9. Interlinking backscatter, grain size and benthic community structure

    NASA Astrophysics Data System (ADS)

    McGonigle, Chris; Collier, Jenny S.

    2014-06-01

    The relationship between acoustic backscatter, sediment grain size and benthic community structure is examined using three different quantitative methods, covering image- and angular response-based approaches. Multibeam time-series backscatter (300 kHz) data acquired in 2008 off the coast of East Anglia (UK) are compared with grain size properties, macrofaunal abundance and biomass from 130 Hamon and 16 Clamshell grab samples. Three predictive methods are used: 1) image-based (mean backscatter intensity); 2) angular response-based (predicted mean grain size), and 3) image-based (1st principal component and classification) from Quester Tangent Corporation Multiview software. Relationships between grain size and backscatter are explored using linear regression. Differences in grain size and benthic community structure between acoustically defined groups are examined using ANOVA and PERMANOVA+. Results for the Hamon grab stations indicate significant correlations between measured mean grain size and mean backscatter intensity, angular response predicted mean grain size, and 1st principal component of QTC analysis (all p < 0.001). Results for the Clamshell grab for two of the methods have stronger positive correlations; mean backscatter intensity (r2 = 0.619; p < 0.001) and angular response predicted mean grain size (r2 = 0.692; p < 0.001). ANOVA reveals significant differences in mean grain size (Hamon) within acoustic groups for all methods: mean backscatter (p < 0.001), angular response predicted grain size (p < 0.001), and QTC class (p = 0.009). Mean grain size (Clamshell) shows a significant difference between groups for mean backscatter (p = 0.001); other methods were not significant. PERMANOVA for the Hamon abundance shows benthic community structure was significantly different between acoustic groups for all methods (p ≤ 0.001). Overall these results show considerable promise in that more than 60% of the variance in the mean grain size of the Clamshell grab samples can be explained by mean backscatter or acoustically-predicted grain size. These results show that there is significant predictive capacity for sediment characteristics from multibeam backscatter and that these acoustic classifications can have ecological validity.

  10. Setting health research priorities using the CHNRI method: VI. Quantitative properties of human collective opinion

    PubMed Central

    Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon

    2016-01-01

    Introduction Crowdsourcing has become an increasingly important tool to address many problems – from government elections in democracies and stock market prices to modern online tools such as TripAdvisor or the Internet Movie Database (IMDB). The CHNRI method (the acronym for the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as its major component, which it uses to generate, assess and prioritize among many competing health research ideas. Methods We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different size. First, for each sample generated, we identified the top 20 ranked research ideas, among the 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different size are used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. Results The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14–16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at the sample size of 45 experts (median of the rank correlation coefficient = 0.95; IQR 0.94–0.96). Conclusions Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not Sure/Don't know), stabilises relatively quickly in terms of identifying the ideas that have most support. In this exercise, a high degree of reproducibility of the identified research priorities was achieved with as few as 45–55 experts. PMID:27350874
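
    The resampling procedure described in the Methods can be sketched in a few lines. Below, a randomly generated score matrix stands in for the real CHNRI scores (the data, not the procedure, is invented): scorers are drawn with replacement, the top-20 overlap with the full 91-scorer ranking is computed, and a rank correlation over all 205 ideas is reported:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

scores = rng.random((91, 205))            # placeholder: 91 scorers x 205 ideas
full_mean = scores.mean(axis=0)
full_top20 = set(np.argsort(full_mean)[::-1][:20])

def stability(sample_size, n_rep=500):
    """Median top-20 concordance and rank correlation vs the full panel."""
    overlaps, rhos = [], []
    for _ in range(n_rep):
        idx = rng.integers(0, 91, size=sample_size)   # sampling with replacement
        sub_mean = scores[idx].mean(axis=0)
        top20 = set(np.argsort(sub_mean)[::-1][:20])
        overlaps.append(len(top20 & full_top20))
        rhos.append(spearmanr(sub_mean, full_mean)[0])
    return np.median(overlaps), np.median(rhos)

for n in (15, 30, 45, 55, 90):
    ov, rho = stability(n)
    print(f"{n:2d} scorers: top-20 overlap {ov:.0f}/20, rank correlation {rho:.2f}")
```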

  11. Topological Analysis and Gaussian Decision Tree: Effective Representation and Classification of Biosignals of Small Sample Size.

    PubMed

    Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong

    2017-09-01

    Bucking the trend of big data, in microdevice engineering, small sample size is common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought biosignal analysis new challenges. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from the traditional signal analysis in the spatiotemporal domain, we exploit the biosignal representation in the topological domain that reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify the biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. Currently, there have been no related works that can efficiently tackle the dilemma between avoiding electrochemical reaction and accelerating the assay process using ACEK.

  12. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    PubMed

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from 1 to 50. Beyond that point, precision continued to improve, but the gain from each additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the measurement of hours worked each week by GPs strongly varied according to the number of GPs included and the frequency of measurements per GP during the week measured. The best balance between both dimensions will depend upon different circumstances, such as the target group and the budget available.
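
    A simulation in the spirit of this two-level precision analysis is sketched below. Every number is a made-up placeholder (a 45 h mean week, an 8 h between-GP spread); the point is only to show how the CI half-width shrinks with both more GPs and more prompts per GP:

```python
import numpy as np

rng = np.random.default_rng(1)

def ci_halfwidth(n_gps, prompts_per_week, n_rep=2000):
    """95% CI half-width for mean weekly hours, combining sample
    fluctuation (which GPs are drawn) and measurement fluctuation
    (which moments the SMS prompts happen to hit)."""
    estimates = np.empty(n_rep)
    for r in range(n_rep):
        true_hours = rng.normal(45, 8, size=n_gps)        # sample fluctuation
        p_work = np.clip(true_hours / (7 * 24), 0, 1)     # chance a prompt hits work time
        hits = rng.binomial(prompts_per_week, p_work)     # measurement fluctuation
        estimates[r] = (hits / prompts_per_week * 7 * 24).mean()
    return 1.96 * estimates.std()

# One prompt per 3-h slot = 56/week; one prompt per hour = 168/week.
for n_gps, prompts in [(50, 56), (300, 56), (100, 168)]:
    print(f"{n_gps:3d} GPs, {prompts:3d} prompts/week: "
          f"CI half-width ~ {ci_halfwidth(n_gps, prompts):.1f} h")
```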

  13. Conceptual data sampling for breast cancer histology image classification.

    PubMed

    Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir

    2017-10-01

    Data analytics have become increasingly complicated as the amount of data has increased. One technique that is used to enable data analytics in large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied in classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, represented in classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Structure and properties of clinical coralline implants measured via 3D imaging and analysis.

    PubMed

    Knackstedt, Mark Alexander; Arns, Christoph H; Senden, Tim J; Gross, Karlis

    2006-05-01

    The development and design of advanced porous materials for biomedical applications requires a thorough understanding of how material structure impacts on mechanical and transport properties. This paper illustrates a 3D imaging and analysis study of two clinically proven coral bone graft samples (Porites and Goniopora). Images are obtained from X-ray micro-computed tomography (micro-CT) at a resolution of 16.8 microm. A visual comparison of the two images shows very different structure; Porites has a homogeneous structure and consistent pore size while Goniopora has a bimodal pore size and a strongly disordered structure. A number of 3D structural characteristics are measured directly on the images including pore volume-to-surface-area, pore and solid size distributions, chord length measurements and tortuosity. Computational results made directly on the digitized tomographic images are presented for the permeability, diffusivity and elastic modulus of the coral samples. The results allow one to quantify differences between the two samples. 3D digital analysis can provide a more thorough assessment of biomaterial structure including the pore wall thickness, local flow, mechanical properties and diffusion pathways. We discuss the implications of these results to the development of optimal scaffold design for tissue ingrowth.

  15. Methodological quality of behavioural weight loss studies: a systematic review

    PubMed Central

    Lemon, S. C.; Wang, M. L.; Haughton, C. F.; Estabrook, D. P.; Frisard, C. F.; Pagoto, S. L.

    2018-01-01

    Summary This systematic review assessed the methodological quality of behavioural weight loss intervention studies conducted among adults and associations between quality and statistically significant weight loss outcome, strength of intervention effectiveness and sample size. Searches for trials published between January, 2009 and December, 2014 were conducted using PUBMED, MEDLINE and PSYCINFO and identified ninety studies. Nine methodological quality indicators were assessed: study design, anthropometric measurement approach, sample size calculations, intent-to-treat (ITT) analysis, loss to follow-up rate, missing data strategy, sampling strategy, report of treatment receipt and report of intervention fidelity (mean number of indicators met = 6.3). Indicators most commonly utilized included randomized design (100%), objectively measured anthropometrics (96.7%), ITT analysis (86.7%) and reporting treatment adherence (76.7%). Most studies (62.2%) had a follow-up rate >75% and reported a loss to follow-up analytic strategy or minimal missing data (69.9%). Describing intervention fidelity (34.4%) and sampling from a known population (41.1%) were least common. Methodological quality was not associated with reporting a statistically significant result, effect size or sample size. This review found the published literature of behavioural weight loss trials to be of high quality for specific indicators, including study design and measurement. Areas identified for improvement include the use of more rigorous statistical approaches to loss to follow-up and better fidelity reporting. PMID:27071775

  16. The local environment of ice particles in arctic mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Schlenczek, Oliver; Fugal, Jacob P.; Schledewitz, Waldemar; Borrmann, Stephan

    2015-04-01

    During the RACEPAC field campaign in April and May 2014, research flights were made with the Polar 5 and Polar 6 aircraft from the Alfred Wegener Institute in Arctic clouds near Inuvik, Northwest Territories, Canada. One flight with the Polar 6 aircraft, on May 16, 2014, passed under precipitating, stratiform, mid-level clouds with several penetrations through cloud base. Measurements with HALOHolo, an airborne digital in-line holographic instrument for cloud particles, show ice particles in a field of other cloud particles in a local three-dimensional sample volume (~14 × 19 × 130 mm³, or ~35 cm³). Each holographic sample volume is a snapshot of a three-dimensional piece of cloud at the cm-scale with typically thousands of cloud droplets per sample volume, so each sample volume yields a statistically significant droplet size distribution. Holograms are recorded at a rate of six times per second, which provides one volume sample approximately every 12 meters along the flight path. The size resolution limit for cloud droplets is better than 1 µm due to advanced sizing algorithms. Preliminary results are shown for (1) the ice/liquid water partitioning at the cloud base and the distribution of water droplets around each ice particle, and (2) the spatial and temporal variability of the cloud droplet size distributions at cloud base.

  17. Laboratory and Airborne BRDF Analysis of Vegetation Leaves and Soil Samples

    NASA Technical Reports Server (NTRS)

    Georgiev, Georgi T.; Gatebe, Charles K.; Butler, James J.; King, Michael D.

    2008-01-01

    Laboratory-based Bidirectional Reflectance Distribution Function (BRDF) analysis of vegetation leaves, soil, and leaf litter samples is presented. The leaf litter and soil samples, numbered 1 and 2, were obtained from a site located in the savanna biome of South Africa (Skukuza: 25.0degS, 31.5degE). A third soil sample, number 3, was obtained from Etosha Pan, Namibia (19.20degS, 15.93degE, alt. 1100 m). In addition, the BRDF of local fresh and dry leaves from tulip tree (Liriodendron tulipifera) and acacia tree (Acacia greggii) was studied. It is shown how the BRDF depends on the incident and scatter angles, sample size (i.e., crushed versus whole leaf), soil sample fraction size, sample status (i.e., fresh versus dry leaves), vegetation species (tulip tree versus acacia), and the vegetation's biochemical composition. As a demonstration of the application of the results of this study, airborne BRDF measurements acquired with NASA's Cloud Absorption Radiometer (CAR) over the same general site where the soil and leaf litter samples were obtained are compared to the laboratory results. Good agreement between laboratory and airborne measured BRDF is reported.

  18. The fundamentals of average local variance--Part II: Sampling simple regular patterns with optical imagery.

    PubMed

    Bøcher, Peder Klith; McCloy, Keith R

    2006-02-01

    In this investigation, the characteristics of the average local variance (ALV) function are investigated through the acquisition of images at different spatial resolutions of constructed scenes of regular patterns of black and white squares. It is shown that the ALV plot consistently peaks at a spatial resolution at which the pixels have a size corresponding to half the distance between scene objects, and that, under very specific conditions, it also peaks at a spatial resolution at which the pixel size corresponds to the whole distance between scene objects. It is argued that the peak at object distance, when present, is an expression of the Nyquist sample rate. The presence of this peak is, hence, shown to be a function of the matching between the phase of the scene pattern and the phase of the sample grid, i.e., the image. When these phases match, a clear and distinct peak is produced on the ALV plot. The fact that the peak at half the distance consistently occurs in the ALV plot is linked to the circumstance that the sampling interval (distance between pixels) and the extent of the sampling unit (size of pixels) are equal. Hence, at twice the Nyquist sampling rate, each fundamental period of the pattern is covered by four pixels; therefore, at least one pixel is always completely embedded within one pattern element, regardless of sample scene phase. If the objects in the scene are scattered with a distance larger than their extent, the peak will be related to the size by a factor larger than 1/2. This is suggested to be the explanation for the results presented by others that the ALV plot is related to scene-object size by a factor of 1/2-3/4.
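
    The core of the ALV computation is simple enough to sketch. The following Python fragment builds a checkerboard scene, block-averages it to coarser pixel sizes, and computes one common ALV definition (the mean of local 3 × 3 standard deviations); the window size and checkerboard dimensions are illustrative choices, not the paper's exact setup:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def average_local_variance(img):
    """Mean of the local standard deviations over all 3x3 windows."""
    return sliding_window_view(img, (3, 3)).std(axis=(2, 3)).mean()

def coarsen(img, k):
    """Simulate pixel size k by averaging k x k blocks."""
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# Scene: checkerboard of 8-pixel black/white squares (16-pixel period).
tile = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.ones((8, 8)))
scene = np.tile(tile, (8, 8))

for k in (1, 2, 4, 8, 12, 16):
    print(f"pixel size {k:2d}: ALV = {average_local_variance(coarsen(scene, k)):.4f}")
# Expect the ALV curve to peak near pixel size 8, i.e. half the
# 16-pixel distance between same-coloured squares.
```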

  19. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to the sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of the metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.

  20. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...

  1. COMPARATIVE TOXICITY OF SIZE FRACTIONATED AIRBORNE PARTICULATE MATTER OBTAINED FROM DIFFERENT CITIES IN THE USA

    EPA Science Inventory

    This paper is the result of a collaboration to assess the effects of size-fractionated PM from different locations on murine pulmonary inflammatory responses. In the course of this work, the authors also determined the chemical makeup of each of the samples.

  2. A comparison of two nano-sized particle air filtration tests in the diameter range of 10 to 400 nanometers

    NASA Astrophysics Data System (ADS)

    Japuntich, Daniel A.; Franklin, Luke M.; Pui, David Y.; Kuehn, Thomas H.; Kim, Seong Chan; Viner, Andrew S.

    2007-01-01

    Two different air filter test methodologies are discussed and compared for challenges in the nano-sized particle range of 10-400 nm. Included in the discussion are test procedure development, factors affecting variability and comparisons between results from the tests. One test system which gives a discrete penetration for a given particle size is the TSI 8160 Automated Filter tester (updated and commercially available now as the TSI 3160) manufactured by the TSI, Inc., Shoreview, MN. Another filter test system was developed utilizing a Scanning Mobility Particle Sizer (SMPS) to sample the particle size distributions downstream and upstream of an air filter to obtain a continuous percent filter penetration versus particle size curve. Filtration test results are shown for fiberglass filter paper of intermediate filtration efficiency. Test variables affecting the results of the TSI 8160 for NaCl and dioctyl phthalate (DOP) particles are discussed, including condensation particle counter stability and the sizing of the selected particle challenges. Filter testing using a TSI 3936 SMPS sampling upstream and downstream of a filter is also shown with a discussion of test variables and the need for proper SMPS volume purging and filter penetration correction procedure. For both tests, the penetration versus particle size curves for the filter media studied follow the theoretical Brownian capture model of decreasing penetration with decreasing particle diameter down to 10 nm with no deviation. From these findings, the authors can say with reasonable confidence that there is no evidence of particle thermal rebound in the size range.

  3. 4D Imaging of Salt Precipitation during Evaporation from Saline Porous Media Influenced by the Particle Size Distribution

    NASA Astrophysics Data System (ADS)

    Norouzi Rad, M.; Shokri, N.

    2014-12-01

    Understanding the physics of water evaporation from saline porous media is important in many processes, such as vegetation and plant growth, biodiversity in soil, and the durability of building materials. To investigate the effect of particle size distribution on the dynamics of salt precipitation in saline porous media during evaporation, we applied the X-ray micro-tomography technique. Six samples of quartz sand with different grain size distributions were used in the present study, enabling us to constrain the effects of particle and pore sizes on salt precipitation patterns and dynamics. The pore size distributions were computed using the pore-scale X-ray images. The packed beds were saturated with NaCl solution of 3 molal and the X-ray imaging was continued for one day with a temporal resolution of 30 min, resulting in pore-scale information about the evaporation and precipitation dynamics. Our results show more precipitation at the early stage of the evaporation in the case of sand with the larger particle size, due to the presence of fewer evaporation sites at the surface. The presence of more preferential evaporation sites at the surface of finer sands significantly modified the patterns and thickness of the salt crust deposited on the surface, such that a thinner salt crust covering a larger area was formed in the case of sand with smaller particle size, as opposed to the thicker patchy crusts in samples with larger particle sizes. Our results provide new insights regarding the physics of salt precipitation in porous media during evaporation.

  4. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of the information essential for replicating a sample size calculation, as well as on the accuracy of the calculation itself. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  5. Investigating Test Equating Methods in Small Samples through Various Factors

    ERIC Educational Resources Information Center

    Asiret, Semih; Sünbül, Seçil Ömür

    2016-01-01

    This study aimed to compare equating methods for the random groups design in small samples across factors such as sample size, difference in difficulty between forms, and the guessing parameter. Which method gives better results under which conditions was also investigated. In this study, 5,000 dichotomous simulated data…

  6. Influence of androgen receptor repeat polymorphisms on personality traits in men

    PubMed Central

    Westberg, Lars; Henningsson, Susanne; Landén, Mikael; Annerbrink, Kristina; Melke, Jonas; Nilsson, Staffan; Rosmond, Roland; Holm, Göran; Anckarsäter, Henrik; Eriksson, Elias

    2009-01-01

    Background Testosterone has been attributed importance for various aspects of behaviour. The aim of our study was to investigate the potential influence of 2 functional polymorphisms in the amino terminal of the androgen receptor on personality traits in men. Methods We assessed and genotyped 141 men born in 1944 recruited from the general population. We used 2 different instruments: the Karolinska Scales of Personality and the Temperament and Character Inventory. For replication, we similarly assessed 63 men recruited from a forensic psychiatry study group. Results In the population-recruited sample, the lengths of the androgen receptor repeats were associated with neuroticism, extraversion and self-transcendence. The association with extraversion was replicated in the independent sample. Limitations Our 2 samples differed in size; sample 1 was of moderate size and sample 2 was small. In addition, the homogeneity of sample 1 probably enhanced our ability to detect significant associations between genotype and phenotype. Conclusion Our results suggest that the repeat polymorphisms in the androgen receptor gene may influence personality traits in men. PMID:19448851

  7. An analysis of respondent driven sampling with Injection Drug Users (IDU) in Albania and the Russian Federation.

    PubMed

    Stormer, Ame; Tun, Waimar; Guli, Lisa; Harxhi, Arjan; Bodanovskaia, Zinaida; Yakovleva, Anna; Rusakova, Maia; Levina, Olga; Bani, Roland; Rjepaj, Klodian; Bino, Silva

    2006-11-01

    Injection drug users in Tirana, Albania and St. Petersburg, Russia were recruited into a study assessing HIV-related behaviors and HIV serostatus using Respondent Driven Sampling (RDS), a peer-driven recruitment sampling strategy that results in a probability sample. (Salganik M, Heckathorn DD. Sampling and estimation in hidden populations using respondent-driven sampling. Sociol Methodol. 2004;34:193-239). This paper presents a comparison of RDS implementation, findings on network and recruitment characteristics, and lessons learned. Initiated with 13 to 15 seeds, approximately 200 IDUs were recruited within 8 weeks. Information resulting from RDS indicates that social network patterns from the two studies differ greatly. Female IDUs in Tirana had smaller network sizes than male IDUs, unlike in St. Petersburg where female IDUs had larger network sizes than male IDUs. Recruitment patterns in each country also differed by demographic categories. Recruitment analyses indicate that IDUs form socially distinct groups by sex in Tirana, whereas there was a greater degree of gender mixing patterns in St. Petersburg. RDS proved to be an effective means of surveying these hard-to-reach populations.

  8. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
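
    As a minimal sketch of the blinded re-estimation step itself (not of the exact distributional results derived in the paper), the following Python fragment computes the one-sample variance of pooled interim data, ignoring treatment labels, and plugs it into the standard two-sample normal-approximation formula; all numbers are hypothetical:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def blinded_n_per_arm(pooled, delta, alpha=0.05, power=0.80):
    """Re-estimated sample size per arm from the blinded (lumped)
    one-sample variance of the internal pilot data.

    Note: the lumped variance overstates the within-group variance
    by up to delta^2 / 4, which is the price of staying blinded."""
    s2 = np.var(pooled, ddof=1)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * s2 * (z / delta) ** 2))

# Hypothetical internal pilot: 40 blinded observations.
pilot = rng.normal(0.0, 1.3, size=40)
print("re-estimated n per arm:", blinded_n_per_arm(pilot, delta=0.5))
```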

  9. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination usually is taught based on theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, explored factors affecting the level of post-test score after training in sample size calculation, and investigated participants' attitude toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups, namely, 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were supplied to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum 10 points, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  10. Measuring the molecular dimensions of wine tannins: comparison of small-angle X-ray scattering, gel-permeation chromatography and mean degree of polymerization.

    PubMed

    McRae, Jacqui M; Kirby, Nigel; Mertens, Haydyn D T; Kassara, Stella; Smith, Paul A

    2014-07-23

    The molecular size of wine tannins can influence astringency, and yet it has been unclear whether the standard methods for determining average tannin molecular weight (MW), including gel-permeation chromatography (GPC) and depolymerization reactions, are actually related to the size of the tannin in wine-like conditions. Small-angle X-ray scattering (SAXS) was therefore used to determine the molecular sizes and corresponding MWs of wine tannin samples from 3- and 7-year-old Cabernet Sauvignon wines in a variety of wine-like matrices (5-15% and 100% ethanol; 0-200 mM NaCl; pH 3.0-4.0), and compared to those measured using the standard methods. The SAXS results indicated that the tannin samples from the older wine were larger than those of the younger wine and that wine composition did not greatly impact tannin molecular size. The average tannin MWs as determined by GPC correlated strongly with the SAXS results, suggesting that this method does give a good indication of tannin molecular size in wine-like conditions. The MW as determined from the depolymerization reactions did not correlate as strongly with the SAXS results. To our knowledge, SAXS measurements have not previously been attempted for wine tannins.

  11. Probabilistic Design of a Mars Sample Return Earth Entry Vehicle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Mitcheltree, Robert A.

    2002-01-01

    The driving requirement for design of a Mars Sample Return mission is to assure containment of the returned samples. Designing to, and demonstrating compliance with, such a requirement requires physics-based tools that establish the relationship between engineers' sizing margins and probabilities of failure. The traditional method of determining margins on ablative thermal protection systems, while conservative, provides little insight into the actual probability of an over-temperature during flight. The objective of this paper is to describe a new methodology for establishing margins on sizing the thermal protection system (TPS). Results of this Monte Carlo approach are compared with traditional methods.

  12. Pelvic dimorphism in relation to body size and body size dimorphism in humans.

    PubMed

    Kurki, Helen K

    2011-12-01

    Many mammalian species display sexual dimorphism in the pelvis, where females possess larger dimensions of the obstetric (pelvic) canal than males. This is contrary to the general pattern of body size dimorphism, where males are larger than females. Pelvic dimorphism is often attributed to selection relating to parturition, or as a developmental consequence of secondary sexual differentiation (different allometric growth trajectories of each sex). Among anthropoid primates, species with higher body size dimorphism have higher pelvic dimorphism (in converse directions), which is consistent with an explanation of differential growth trajectories for pelvic dimorphism. This study investigates whether the pattern holds intraspecifically in humans by asking: Do human populations with high body size dimorphism also display high pelvic dimorphism? Previous research demonstrated that in some small-bodied populations, relative pelvic canal size can be larger than in large-bodied populations, while others have suggested that larger-bodied human populations display greater body size dimorphism. Eleven human skeletal samples (total N: male = 229, female = 208) were utilized, representing a range of body sizes and geographical regions. Skeletal measurements of the pelvis and femur were collected and indices of sexual dimorphism for the pelvis and femur were calculated for each sample [ln(M/F)]. Linear regression was used to examine the relationships between indices of pelvic and femoral size dimorphism, and between pelvic dimorphism and female femoral size. Contrary to expectations, the results suggest that pelvic dimorphism in humans is generally not correlated with body size dimorphism or female body size. These results indicate that divergent patterns of dimorphism exist for the pelvis and body size in humans. Implications for the evaluation of the evolution of pelvic dimorphism and rotational childbirth in Homo are considered. Copyright © 2011 Elsevier Ltd. All rights reserved.
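
    For reference, the dimorphism index used here is simply the natural log of the ratio of male to female sample means. A minimal worked example with made-up pelvic measurements:

```python
import numpy as np

def dimorphism_index(male_vals, female_vals):
    """Index of sexual dimorphism, ln(M/F), from sample means."""
    return float(np.log(np.mean(male_vals) / np.mean(female_vals)))

# Hypothetical pelvic-canal breadths (mm) for one skeletal sample.
males = [118.0, 121.5, 115.2, 119.8]
females = [124.3, 127.1, 122.8, 125.0]

print(f"pelvic dimorphism ln(M/F) = {dimorphism_index(males, females):+.3f}")
# Negative: females larger than males, the typical direction for the
# obstetric canal; femoral (body size) indices are typically positive.
```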

  13. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied in heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  14. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
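
    The abstract's approach translates directly into a short simulation. The sketch below (in Python rather than R, with an assumed slope of 0.3 and unit residual SD; all settings are placeholders) estimates power for a single regression coefficient by repeated data generation and model fitting, then scans n toward the usual 0.80 target:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

def simulated_power(n, beta=0.3, n_sim=1000, alpha=0.05):
    """Monte Carlo power for detecting a single regression slope."""
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)          # unit residual SD
        fit = sm.OLS(y, sm.add_constant(x)).fit()
        rejections += fit.pvalues[1] < alpha       # p-value of the slope
    return rejections / n_sim

for n in (50, 70, 90, 110):
    print(f"n = {n:3d}: simulated power ~ {simulated_power(n):.2f}")
```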

  15. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. The sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
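
    The kappa paradox the authors mention is easy to reproduce. For two raters who share a common marginal prevalence p, chance agreement is pe = p² + (1 − p)², and kappa follows from the simple proportion of agreement po; the toy numbers below are illustrative:

```python
def kappa_from_agreement(po, prevalence):
    """Cohen's kappa for two raters sharing a marginal prevalence,
    given the simple proportion of agreement po."""
    pe = prevalence**2 + (1 - prevalence)**2   # chance agreement
    return (po - pe) / (1 - pe)

# Same 90% observed agreement, very different kappas:
print(kappa_from_agreement(0.90, 0.50))   # ~0.80
print(kappa_from_agreement(0.90, 0.90))   # ~0.44: high agreement, low kappa
```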

  16. Structural transformation of crystallized debranched cassava starch during dual hydrothermal treatment in relation to enzyme digestibility.

    PubMed

    Boonna, Sureeporn; Tongta, Sunanta

    2018-07-01

    Structural transformation of crystallized debranched cassava starch prepared by temperature cycling (TC) treatment and then subjected to annealing (ANN), heat-moisture treatment (HMT) and dual hydrothermal treatments of ANN and HMT was investigated. The relative crystallinity, lateral crystal size, melting temperature and resistant starch (RS) content increased for all hydrothermally treated samples, but the slowly digestible starch (SDS) content decreased. The RS content followed the order HMT → ANN > HMT > ANN → HMT > ANN > TC. The HMT → ANN sample showed a larger lateral crystal size with more homogeneity, whereas the ANN → HMT sample had a smaller lateral crystal size with a higher melting temperature. After cooking at 50% moisture, an increase in the RS content of the samples was observed, particularly for the ANN → HMT sample. These results suggest that structural changes of crystallized debranched starch during hydrothermal treatments depend on initial crystalline characteristics and treatment sequences, influencing thermal stability, enzyme digestibility, and cooking stability. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Comparison of Random Forest, k-Nearest Neighbor, and Support Vector Machine Classifiers for Land Cover Classification Using Sentinel-2 Imagery

    PubMed Central

    Thanh Noi, Phan; Kappas, Martin

    2017-01-01

    In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km² within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets. PMID:29271909

  18. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters usually is not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  19. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    PubMed

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study, we can generally state that an analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate a sampling RSD in the same region as the analysis RSD for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit drugs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  1. Inference and sample size calculation for clinical trials with incomplete observations of paired binary outcomes.

    PubMed

    Zhang, Song; Cao, Jing; Ahn, Chul

    2017-02-20

    We investigate the estimation of intervention effect and sample size determination for experiments where subjects are supposed to contribute paired binary outcomes with some incomplete observations. We propose a hybrid estimator to appropriately account for the mixed nature of observed data: paired outcomes from those who contribute complete pairs of observations and unpaired outcomes from those who contribute either pre-intervention or post-intervention outcomes. We theoretically prove that if incomplete data are evenly distributed between the pre-intervention and post-intervention periods, the proposed estimator will always be more efficient than the traditional estimator. A numerical study shows that when the distribution of incomplete data is unbalanced, the proposed estimator will be superior when there is moderate-to-strong positive within-subject correlation. We further derive a closed-form sample size formula to help researchers determine how many subjects need to be enrolled in such studies. Simulation results suggest that the calculated sample size maintains the empirical power and type I error under various design configurations. We demonstrate the proposed method using a real application example. Copyright © 2016 John Wiley & Sons, Ltd.

  2. High-concentration zeta potential measurements using light-scattering techniques

    PubMed Central

    Kaszuba, Michael; Corbett, Jason; Watson, Fraser Mcneil; Jones, Andrew

    2010-01-01

    Zeta potential is the key parameter that controls electrostatic interactions in particle dispersions. Laser Doppler electrophoresis is an accepted method for the measurement of particle electrophoretic mobility and hence zeta potential of dispersions of colloidal size materials. Traditionally, samples measured by this technique have to be optically transparent. Therefore, depending upon the size and optical properties of the particles, many samples will be too concentrated and will require dilution. The ability to measure samples at or close to their neat concentration would be desirable as it would minimize any changes in the zeta potential of the sample owing to dilution. However, the ability to measure turbid samples using light-scattering techniques presents a number of challenges. This paper discusses electrophoretic mobility measurements made on turbid samples at high concentration using a novel cell with reduced path length. Results are presented on two different sample types, titanium dioxide and a polyurethane dispersion, as a function of sample concentration. For both of the sample types studied, the electrophoretic mobility results show a gradual decrease as the sample concentration increases and the possible reasons for these observations are discussed. Further, a comparison of the data against theoretical models is presented and discussed. Conclusions and recommendations are made from the zeta potential values obtained at high concentrations. PMID:20732896

  3. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
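
    A minimal sketch of the kind of Monte Carlo experiment described, assuming standard-normal covariates and equal true slopes; Firth's correction is omitted (it is not part of statsmodels' core Logit), and separation is crudely flagged by catching fit failures rather than by the paper's more careful diagnostics.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)

        def small_sample_bias(n, k=5, beta=0.5, n_sims=500):
            """Monte Carlo bias of the first ML logit slope at sample size n.
            With ~half the outcomes being events, EPV is roughly (n / 2) / k."""
            estimates, failures = [], 0
            for _ in range(n_sims):
                X = rng.standard_normal((n, k))
                p = 1 / (1 + np.exp(-X @ np.full(k, beta)))
                y = rng.binomial(1, p)
                try:
                    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
                    estimates.append(fit.params[1])
                except Exception:  # includes perfectly separated data sets
                    failures += 1
            return np.mean(estimates) - beta, failures

        print(small_sample_bias(60))  # away-from-zero bias expected at low EPV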

  4. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is to make that claim when superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799

  5. Sizing for the apparel industry using statistical analysis - a Brazilian case study

    NASA Astrophysics Data System (ADS)

    Capelassi, C. H.; Carvalho, M. A.; El Kattel, C.; Xu, B.

    2017-10-01

    The study of the body measurements of Brazilian women used the Kinect Body Imaging system for 3D body scanning. The study aims to meet the apparel industry's need for accurate measurements. Data were statistically treated using IBM SPSS 23, with 95% confidence (P < 0.05) for the inferential analysis, with the purpose of grouping the measurements into sizes, so that a smaller number of sizes can cover a greater number of people. The sample consisted of 101 volunteers aged between 19 and 62 years. A cluster analysis was performed to identify the main body shapes of the sample. The results were divided between the top and bottom body portions: for the top portion, the measurements of the abdomen, waist and bust circumferences, as well as the height, were used; for the bottom portion, the measurements of the hip circumference and the height were used. Three sizing systems were developed for the researched sample from the Abdomen-to-Height Ratio - AHR (top portion): Small (AHR < 0.52), Medium (AHR: 0.52-0.58), Large (AHR > 0.58) and from the Hip-to-Height Ratio - HHR (bottom portion): Small (HHR < 0.62), Medium (HHR: 0.62-0.68), Large (HHR > 0.68).

  6. Reference interval computation: which method (not) to choose?

    PubMed

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

    When different methods are applied to reference interval (RI) calculation the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For the other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60 and very close to bootstrapping for a sample size of 120, but in some cases it was unavailable. We recommend against using untransformed parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way of calculating RIs, provided the transformed data satisfy a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
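
    A minimal sketch of the three estimators compared above, assuming a positive-valued marker (Box-Cox requires positive data); the 2,000 resamples and the Shapiro-Wilk normality gateway are illustrative choices rather than the paper's exact protocol.

        import numpy as np
        from scipy import stats
        from scipy.special import inv_boxcox

        def reference_intervals(x, n_boot=2000, seed=0):
            """95% reference interval by bootstrap, parametric and
            Box-Cox transformed parametric methods."""
            rng = np.random.default_rng(seed)
            boot = np.mean([np.percentile(rng.choice(x, len(x)), [2.5, 97.5])
                            for _ in range(n_boot)], axis=0)
            k = np.array([-1.96, 1.96])
            param = np.mean(x) + k * np.std(x, ddof=1)
            xt, lam = stats.boxcox(x)                    # requires x > 0
            normal_ok = stats.shapiro(xt).pvalue > 0.05  # normality gateway
            boxcox = inv_boxcox(np.mean(xt) + k * np.std(xt, ddof=1), lam)
            return boot, param, boxcox if normal_ok else None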

  7. What is a species? A new universal method to measure differentiation and assess the taxonomic rank of allopatric populations, using continuous variables

    PubMed Central

    Donegan, Thomas M.

    2018-01-01

    Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data from birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off, which excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases to sample size. The “species scoring” model of Tobias et al. (2010) involves less drastic cut-offs, but unlike Isler et al. (1998), does not control adequately for sample size and attributes scores in many cases to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations and controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables and a formula is proposed for ranking allopatric populations. This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. Distance between any two populations is calculated using Euclidean summation of non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as a species; if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and is being made available which allows this and the other tests of differentiation and rank studied in this paper to be rapidly analyzed. PMID:29780266
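
    The recalibrated scoring machinery is distributed as a spreadsheet; a rough transcription of the core idea (significance as a gateway, unpooled effect sizes, Euclidean summation) might look like the sketch below. The Welch test and the particular effect-size denominator are stand-ins, not the paper's exact small-sample correction.

        import numpy as np
        from scipy import stats

        def differentiation_score(pop1, pop2, alpha=0.05):
            """Euclidean summation of significance-gated effect sizes;
            pop1/pop2 are lists of per-variable measurement arrays."""
            scores = []
            for a, b in zip(pop1, pop2):
                _, p = stats.ttest_ind(a, b, equal_var=False)  # Welch: unpooled
                d = (np.mean(a) - np.mean(b)) / np.sqrt(
                    (np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
                scores.append(abs(d) if p < alpha else 0.0)    # zero if n.s.
            return np.sqrt(np.sum(np.square(scores)))

    An allopatric pair would then be compared against a related sympatric pair: species rank only if its score exceeds the sympatric score.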

  8. Confidence crisis of results in biomechanics research.

    PubMed

    Knudson, Duane

    2017-11-01

    Many biomechanics studies have small sample sizes and incorrect statistical analyses, so inaccurate inferences and inflated effect magnitudes are commonly reported in the field. This review examines these issues in biomechanics research and summarises potential solutions from research in other fields to increase the confidence in the experimental effects reported in biomechanics. Authors, reviewers and editors of biomechanics research reports are encouraged to improve sample sizes and the resulting statistical power, improve reporting transparency, improve the rigour of the statistical analyses used, and increase the acceptance of replication studies to improve the validity of inferences from data in biomechanics research. The application of sports biomechanics research results would also improve if a larger percentage of unbiased effects and their uncertainty were reported in the literature.

  9. Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies

    USGS Publications Warehouse

    Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.

    2017-01-01

    Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
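
    A deliberately simplified binomial caricature of this trade-off (no full Cormack-Jolly-Seber machinery, detection probability treated as known) reproduces the qualitative result: the single-release estimator confounds dam and background survival, while the paired-release ratio is unbiased but noisier at small release numbers. All rates below are assumed values.

        import numpy as np

        rng = np.random.default_rng(7)

        def rmse(x, truth):
            return float(np.sqrt(np.mean((np.asarray(x) - truth) ** 2)))

        def compare(n_release, s_dam=0.90, s_bg=0.95, p_det=0.6, n_sims=10_000):
            single, paired = [], []
            for _ in range(n_sims):
                d_trt = rng.binomial(n_release, s_dam * s_bg * p_det)  # above dam
                d_ctl = rng.binomial(n_release, s_bg * p_det)          # below dam
                single.append(d_trt / (n_release * p_det))  # confounds s_dam, s_bg
                if d_ctl:  # equal release sizes, so the ratio reduces to counts
                    paired.append(d_trt / d_ctl)
            return rmse(single, s_dam), rmse(paired, s_dam)

        for n in (50, 500, 5000):
            print(n, compare(n))  # the paired design wins only as n grows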

  10. The late Neandertal supraorbital fossils from Vindija Cave, Croatia: a biased sample?

    PubMed

    Ahern, James C M; Lee, Sang-Hee; Hawks, John D

    2002-09-01

    The late Neandertal sample from Vindija (Croatia) has been described as transitional between the earlier Central European Neandertals from Krapina (Croatia) and modern humans. However, the morphological differences indicating this transition may rather be the result of different sex and/or age compositions between the samples. This study tests the hypothesis that the metric differences between the Krapina and Vindija supraorbital samples are due to sampling bias. We focus upon the supraorbital region because past studies have posited this region as particularly indicative of the Vindija sample's transitional nature. Furthermore, the supraorbital region varies significantly with both age and sex. We analyzed four chords and two derived indices of supraorbital torus form as defined by Smith & Ranyard (1980, Am. J. Phys. Anthrop. 93, pp. 589-610). For each variable, we analyzed relative sample bias of the Krapina and Vindija samples using three sampling methods. In order to test the hypothesis that the Vindija sample contains an over-representation of females and/or young while the Krapina sample is normal or also female/young biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a Krapina-based population. In order to test the hypothesis that the Vindija sample is female/young biased while the Krapina sample is male/old biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a generated population whose mean is halfway between Krapina's and Vindija's. Finally, in order to test the hypothesis that the Vindija sample is normal while the Krapina sample contains an over-representation of males and/or old, we determined the probability of drawing a sample of the same size as and with a mean equal to or greater than Krapina's from a Vindija-based population. Unless we assume that the Vindija sample is female/young and the Krapina sample is male/old biased, our results falsify the hypothesis that the metric differences between the Krapina and Vindija samples are due to sample bias.
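
    All three tests reduce to one resampling primitive: the probability that a sample of the Vindija size, drawn from a reference population, has a mean at or beyond the observed one. A minimal Monte Carlo sketch of that primitive is below; the population and the numbers are invented, and drawing with replacement from a generated population mirrors the paper's setup only loosely.

        import numpy as np

        def p_mean_leq(population, n, observed_mean, n_draws=100_000, seed=0):
            """P(mean of a size-n sample from `population` <= observed_mean)."""
            rng = np.random.default_rng(seed)
            means = rng.choice(population, size=(n_draws, n)).mean(axis=1)
            return float((means <= observed_mean).mean())

        # e.g. supraorbital chord values for a generated Krapina-based
        # population versus a hypothetical Vindija sample mean
        pop = np.random.default_rng(1).normal(30.0, 3.0, size=10_000)
        print(p_mean_leq(pop, n=6, observed_mean=27.0))  # ~0.007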

  11. Sample size guidelines for fitting a lognormal probability distribution to censored most probable number data with a Markov chain Monte Carlo method.

    PubMed

    Williams, Michael S; Cao, Yong; Ebel, Eric D

    2013-07-15

    Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.
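
    A minimal random-walk Metropolis sketch for fitting a lognormal to left-censored data of this kind (screening negatives treated as observations below a detection limit): flat priors, a fixed step size and ignoring MPN enumeration error are all simplifying assumptions relative to the paper's method.

        import numpy as np
        from scipy import stats

        def log_post(mu, sigma, detects, n_cens, dl):
            """Censored lognormal log-likelihood (flat priors); the Jacobian
            of the log transform is constant in (mu, sigma), so it is dropped."""
            if sigma <= 0:
                return -np.inf
            ll = stats.norm.logpdf(np.log(detects), mu, sigma).sum()
            return ll + n_cens * stats.norm.logcdf((np.log(dl) - mu) / sigma)

        def metropolis(detects, n_cens, dl, n_iter=20_000, step=0.1, seed=0):
            rng = np.random.default_rng(seed)
            mu, sigma = np.log(detects).mean(), np.log(detects).std(ddof=1)
            ll, chain = log_post(mu, sigma, detects, n_cens, dl), []
            for _ in range(n_iter):
                mu_p = mu + step * rng.standard_normal()
                sigma_p = sigma + step * rng.standard_normal()
                ll_p = log_post(mu_p, sigma_p, detects, n_cens, dl)
                if np.log(rng.uniform()) < ll_p - ll:  # Metropolis accept step
                    mu, sigma, ll = mu_p, sigma_p, ll_p
                chain.append((mu, sigma))
            return np.asarray(chain[n_iter // 2:])  # crude burn-in

        x = np.random.default_rng(2).lognormal(-1.0, 1.2, size=500)  # truth
        dl = 0.3  # detection limit; ~40% of the draws fall below it here
        draws = metropolis(x[x > dl], n_cens=int((x <= dl).sum()), dl=dl)
        print(draws.mean(axis=0))  # posterior means near (-1.0, 1.2)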

  12. Discrete element method (DEM) simulations of stratified sampling during solid dosage form manufacturing.

    PubMed

    Hancock, Bruno C; Ketterhagen, William R

    2011-10-14

    Discrete element model (DEM) simulations of the discharge of powders from hoppers under gravity were analyzed to provide estimates of dosage form content uniformity during the manufacture of solid dosage forms (tablets and capsules). For a system that exhibits moderate segregation the effects of sample size, number, and location within the batch were determined. The various sampling approaches were compared to current best-practices for sampling described in the Product Quality Research Institute (PQRI) Blend Uniformity Working Group (BUWG) guidelines. Sampling uniformly across the discharge process gave the most accurate results with respect to identifying segregation trends. Sigmoidal sampling (as recommended in the PQRI BUWG guidelines) tended to overestimate potential segregation issues, whereas truncated sampling (common in industrial practice) tended to underestimate them. The size of the sample had a major effect on the absolute potency RSD. The number of sampling locations (10 vs. 20) had very little effect on the trends in the data, and the number of samples analyzed at each location (1 vs. 3 vs. 7) had only a small effect for the sampling conditions examined. The results of this work provide greater understanding of the effect of different sampling approaches on the measured content uniformity of real dosage forms, and can help to guide the choice of appropriate sampling protocols. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Evolution of deep-bed filtration of engine exhaust particulates with trapped mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viswanathan, Sandeep; Rothamer, David A.; Foster, David E.

    Micro-scale filtration experiments were performed on cordierite filter samples using particulate matter (PM) generated by a spark-ignition direct-injection (SIDI) engine fueled with tier II EEE certification gasoline. Size-resolved mass and number concentrations were obtained from several engine operating conditions. The resultant mass-mobility relationships showed weak dependence on the operating condition. An integrated particle size distribution (IPSD) method was used to estimate the PM mass concentration in the exhaust stream from the SIDI engine and a heavy-duty diesel (HDD) engine. The average estimated mass concentration between all conditions was ~77% of the gravimetric measurements performed on Teflon filters. Despite the relatively low elemental carbon fraction (~0.4 to 0.7), the IPSD mass for stoichiometric SIDI exhaust was ~83±38% of the gravimetric measurement. Identical cordierite filter samples with properties representative of diesel particulate filters were sequentially loaded with PM from the different SIDI engine operating conditions, in order of increasing PM mass concentration. Simultaneous particle size distribution measurements upstream and downstream of the filter sample were used to evaluate filter performance evolution and the instantaneous trapped mass within the filter for two different filter face velocities. The evolution of filtration performance for the different samples was sensitive only to trapped mass, despite using PM from a wide range of operating conditions. Higher filtration velocity resulted in a more rapid shift in the most penetrating particle size towards smaller mobility diameters.

  14. Effects of grain size, mineralogy, and acid-extractable grain coatings on the distribution of the fallout radionuclides 7Be, 10Be, 137Cs, and 210Pb in river sediment

    NASA Astrophysics Data System (ADS)

    Singleton, Adrian A.; Schmidt, Amanda H.; Bierman, Paul R.; Rood, Dylan H.; Neilson, Thomas B.; Greene, Emily Sophie; Bower, Jennifer A.; Perdrial, Nicolas

    2017-01-01

    Grain-size dependencies in fallout radionuclide activity have been attributed to either increase in specific surface area in finer grain sizes or differing mineralogical abundances in different grain sizes. Here, we consider a third possibility, that the concentration and composition of grain coatings, where fallout radionuclides reside, controls their activity in fluvial sediment. We evaluated these three possible explanations in two experiments: (1) we examined the effect of sediment grain size, mineralogy, and composition of the acid-extractable materials on the distribution of 7Be, 10Be, 137Cs, and unsupported 210Pb in detrital sediment samples collected from rivers in China and the United States, and (2) we periodically monitored 7Be, 137Cs, and 210Pb retention in samples of known composition exposed to natural fallout in Ohio, USA for 294 days. Acid-extractable materials (made up predominately of Fe, Mn, Al, and Ca from secondary minerals and grain coatings produced during pedogenesis) are positively related to the abundance of fallout radionuclides in our sediment samples. Grain-size dependency of fallout radionuclide concentrations was significant in detrital sediment samples, but not in samples exposed to fallout under controlled conditions. Mineralogy had a large effect on 7Be and 210Pb retention in samples exposed to fallout, suggesting that sieving sediments to a single grain size or using specific surface area-based correction terms may not completely control for preferential distribution of these nuclides. We conclude that time-dependent geochemical, pedogenic, and sedimentary processes together result in the observed differences in nuclide distribution between different grain sizes and substrate compositions. These findings likely explain variability of measured nuclide activities in river networks that exceeds the variability introduced by analytical techniques as well as spatial and temporal differences in erosion rates and processes. In short, we suggest that presence and amount of pedogenic grain coatings is more important than either specific surface area or surface charge in setting the distribution of fallout radionuclides.

  15. Sample Size Requirements for Studies of Treatment Effects on Beta-Cell Function in Newly Diagnosed Type 1 Diabetes

    PubMed Central

    Lachin, John M.; McGee, Paula L.; Greenbaum, Carla J.; Palmer, Jerry; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8–12 years of age, adolescents (13–17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13–17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes. PMID:22102862

  16. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    PubMed

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.

  17. Assessing grain-size correspondence between flow and deposits of controlled floods in the Colorado River, USA

    USGS Publications Warehouse

    Draut, Amy; Rubin, David M.

    2013-01-01

    Flood-deposited sediment has been used to decipher environmental parameters such as variability in watershed sediment supply, paleoflood hydrology, and channel morphology. It is not well known, however, how accurately the deposits reflect sedimentary processes within the flow, and hence what sampling intensity is needed to decipher records of recent or long-past conditions. We examine these problems using deposits from dam-regulated floods in the Colorado River corridor through Marble Canyon–Grand Canyon, Arizona, U.S.A., in which steady-peaked floods represent a simple end-member case. For these simple floods, most deposits show inverse grading that reflects coarsening suspended sediment (a result of fine-sediment-supply limitation), but there is enough eddy-scale variability that some profiles show normal grading that did not reflect grain-size evolution in the flow as a whole. To infer systemwide grain-size evolution in modern or ancient depositional systems requires sampling enough deposit profiles that the standard error of the mean of grain-size-change measurements becomes small relative to the magnitude of observed changes. For simple, steady-peaked floods, 5–10 profiles or fewer may suffice to characterize grain-size trends robustly, but many more samples may be needed from deposits with greater variability in their grain-size evolution.

  18. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.

    PubMed

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated.

  19. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    PubMed Central

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated. PMID:28334046

  20. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…

  1. Physicochemical properties of respirable-size lunar dust

    NASA Astrophysics Data System (ADS)

    McKay, D. S.; Cooper, B. L.; Taylor, L. A.; James, J. T.; Thomas-Keprta, K.; Pieters, C. M.; Wentworth, S. J.; Wallace, W. T.; Lee, T. S.

    2015-02-01

    We separated the respirable dust and other size fractions from Apollo 14 bulk sample 14003,96 in a dry nitrogen environment. While our toxicology team performed in vivo and in vitro experiments with the respirable fraction, we studied the size distribution and shape, chemistry, mineralogy, spectroscopy, iron content and magnetic resonance of various size fractions. These represent the finest-grained lunar samples ever measured for either FMR np-Fe⁰ index or precise bulk chemistry, and are the first instance we know of in which SEM/TEM samples have been obtained without using liquids. The concentration of single-domain, nanophase metallic iron (np-Fe⁰) increases as particle size diminishes to 2 μm, confirming previous extrapolations. Size-distribution studies disclosed that the most frequent particle size was in the 0.1-0.2 μm range, suggesting a relatively high surface area and therefore higher potential toxicity. Lunar dust particles are insoluble in isopropanol but slightly soluble in distilled water (~0.2 wt%/3 days). The interaction between water and lunar fines, which results in both agglomeration and partial dissolution, is observable on a macro scale over time periods of less than an hour. Most of the respirable grains were smooth amorphous glass. This suggests less toxicity than if the grains were irregular, porous, or jagged, and may account for the fact that lunar dust is less toxic than ground quartz.

  2. Sample size calculations for case-control studies

    Cancer.gov

    This R package can be used to calculate the required samples size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  3. Microstructural and mechanical evolution during deformation and annealing of poly-phase marbles - constraints from laboratory experiments and field observations

    NASA Astrophysics Data System (ADS)

    Austin, N. J.; Evans, B.; Dresen, G. H.; Rybacki, E.

    2009-12-01

    Deformed rocks commonly consist of several mineral phases, each with dramatically different mechanical properties. In both naturally and experimentally deformed rocks, deformation mechanisms and, in turn, strength, are commonly investigated by analyzing microstructural elements such as crystallographic preferred orientation (CPO) and recrystallized grain size. Here, we investigated the effect of variations in the volume fraction and the geometry of rigid second phases on the strength and evolution of CPO and grain size of synthetic calcite rocks. Experiments using triaxial compression and torsional loading were conducted at 1023 K and equivalent strain rates between ~2 × 10⁻⁶ and 1 × 10⁻³ s⁻¹. The second phases in these synthetic assemblages are rigid carbon spheres or splinters with known particle size distributions and geometries, which are chemically inert at our experimental conditions. Under hydrostatic conditions, the addition of as little as 1 vol.% carbon spheres poisons normal grain growth. Shape is also important: for an equivalent volume fraction and grain dimension, carbon splinters result in a finer calcite grain size than carbon spheres. In samples deformed at “high” strain rates, or which have “large” mean free spacing of the pinning phase, the final recrystallized grain size is well explained by competing grain growth and grain size reduction processes, where the grain-size reduction rate is determined by the rate at which mechanical work is done during deformation. In these samples, the final grain size is finer than in samples heat-treated hydrostatically for equivalent durations. The addition of 1 vol.% spheres to calcite has little effect on either the strength or CPO development. Adding 10 vol.% splinters increases the strength at low strains and low strain rates, but has little effect on the strength at high strains and/or high strain rates, compared to pure samples. A CPO similar to that in pure samples is observed, although the intensity is reduced in samples containing 10 vol.% splinters. When 10 vol.% spheres are added to calcite, the strength of the aggregate is reduced, and a distinct and strong CPO develops. Viscoplastic self-consistent calculations were used to model the evolution of CPO in these materials, and these suggest a variation in the activity of the various slip systems within pure samples and those containing 10 vol.% spheres. The applicability of these laboratory observations has been tested with field-based observations made in the Morcles Nappe (Swiss Helvetic Alps). In the Morcles Nappe, calcite grain size becomes progressively finer as the thrust contact is approached, and there is a concomitant increase in CPO intensity, with the strongest CPOs in the finest-grained, quartz-rich limestones nearest the thrust contact, which are interpreted to have been deformed to the highest strains. Thus, our laboratory results may be used to provide insight into the distribution of strain observed in natural shear zones.

  4. Effects of sampling techniques on physical parameters and concentrations of selected persistent organic pollutants in suspended matter.

    PubMed

    Pohlert, Thorsten; Hillebrand, Gudrun; Breitung, Vera

    2011-06-01

    This study focusses on the effect of sampling techniques for suspended matter in stream water on subsequent particle-size distribution and concentrations of total organic carbon and selected persistent organic pollutants. The key questions are whether differences between the sampling techniques are due to the separation principle of the devices or due to the difference between time-proportional versus integral sampling. Several multivariate homogeneity tests were conducted on an extensive set of field-data that covers the period from 2002 to 2007, when up to three different sampling techniques were deployed in parallel at four monitoring stations of the River Rhine. The results indicate homogeneity for polychlorinated biphenyls, but significant effects due to the sampling techniques on particle-size, organic carbon and hexachlorobenzene. The effects can be amplified depending on the site characteristics of the monitoring stations.

  5. The relationship between structural stability and electrochemical performance of multi-element doped alpha nickel hydroxide

    NASA Astrophysics Data System (ADS)

    Miao, Chengcheng; Zhu, Yanjuan; Huang, Liangguo; Zhao, Tengqi

    2015-01-01

    The multi-element doped alpha nickel hydroxide has been prepared by a supersonic co-precipitation method. Three kinds of samples, A, B and C, were prepared by chemically coprecipitating Ni/Al, Ni/Al/Mn and Ni/Al/Mn/Yb, respectively. Inductively coupled plasma atomic emission spectroscopy (ICP-AES), particle size distribution (PSD) measurement, X-ray diffraction (XRD), scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FT-IR) were used to characterize the physical properties of the synthesized α-Ni(OH)2 samples, such as chemical composition, morphology and structural stability of the crystal. The results show that all samples are nano-sized materials and that the interlayer spacing becomes larger and the structural stability better with the increase in doped elements and doping ratio. The prepared alpha nickel hydroxide samples were added into micro-sized beta nickel hydroxide to form biphase electrode materials for Ni-MH batteries. The electrochemical characterization of the biphase electrodes, including cyclic voltammetry (CV) and charge/discharge tests, was also performed. The results demonstrate that the biphase electrode with sample C exhibits better electrochemical reversibility and cyclic stability, higher charge efficiency and discharge potential, and a larger proton diffusion coefficient (5.81 × 10⁻¹² cm² s⁻¹) and discharge capacity (309.0 mAh g⁻¹). Hence, it indicates that all doped elements produce a synergic effect and further improve the electrochemical properties of the alpha nickel hydroxide.

  6. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    PubMed

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of the mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long-term follow-up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had an impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing the prognostic value of a series of cut-off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides better prognostic value in patients with invasive breast cancer.
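
    The SRS idea itself is a one-liner: a fixed sampling interval with a random start over nuclei enumerated in a fixed scan order. A minimal sketch with synthetic nuclear areas (the lognormal parameters are invented):

        import numpy as np

        def srs_indices(n_total, n_sample, rng):
            """Systematic random sampling: fixed interval, random start."""
            step = n_total / n_sample
            return (rng.uniform(0, step) + step * np.arange(n_sample)).astype(int)

        rng = np.random.default_rng(3)
        areas = rng.lognormal(3.5, 0.4, size=800)  # hypothetical nuclear areas
        sample = areas[srs_indices(len(areas), 50, rng)]
        print(sample.mean(), sample.std(ddof=1))   # MNA and SDNA estimates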

  7. Thermal conductivity of nanocrystalline silicon: importance of grain size and frequency-dependent mean free paths.

    PubMed

    Wang, Zhaojie; Alaniz, Joseph E; Jang, Wanyoung; Garay, Javier E; Dames, Chris

    2011-06-08

    The thermal conductivity reduction due to grain boundary scattering is widely interpreted using a scattering length assumed equal to the grain size and independent of the phonon frequency (gray). To assess these assumptions and decouple the contributions of porosity and grain size, five samples of undoped nanocrystalline silicon have been measured with average grain sizes ranging from 550 to 64 nm and porosities from 17% to less than 1%, at temperatures from 310 to 16 K. The samples were prepared using current-activated, pressure-assisted densification (CAPAD). At low temperature the thermal conductivities of all samples show a T² dependence which cannot be explained by any traditional gray model. The measurements are explained over the entire temperature range by a new frequency-dependent model in which the mean free path for grain boundary scattering is inversely proportional to the phonon frequency, which is shown to be consistent with asymptotic analysis of atomistic simulations from the literature. In all cases the recommended boundary scattering length is smaller than the average grain size. These results should prove useful for the integration of nanocrystalline materials in devices such as advanced thermoelectrics.

  8. Authoritarian Parenting and Asian Adolescent School Performance: Insights from the US and Taiwan

    PubMed Central

    Pong, Suet-ling; Johnston, Jamie; Chen, Vivien

    2014-01-01

    Our study re-examines the relationship between parenting and school performance among Asian students. We use two sources of data: wave I of the Adolescent Health Longitudinal Survey (Add Health), and waves I and II of the Taiwan Educational Panel Survey (TEPS). Analysis using Add Health reveals that the Asian-American/European-American difference in the parenting–school performance relationship is due largely to differential sample sizes. When we select a random sample of European-American students comparable to the sample size of Asian-American students, authoritarian parenting also shows no effect for European-American students. Furthermore, analysis of TEPS shows that authoritarian parenting is negatively associated with children's school achievement, while authoritative parenting is positively associated. This result for Taiwanese Chinese students is similar to previous results for European-American students in the US. PMID:24850978

  9. Authoritarian Parenting and Asian Adolescent School Performance: Insights from the US and Taiwan.

    PubMed

    Pong, Suet-Ling; Johnston, Jamie; Chen, Vivien

    2010-01-01

    Our study re-examines the relationship between parenting and school performance among Asian students. We use two sources of data: wave I of the Adolescent Health Longitudinal Survey (Add Health), and waves I and II of the Taiwan Educational Panel Survey (TEPS). Analysis using Add Health reveals that the Asian-American/European-American difference in the parenting-school performance relationship is due largely to differential sample sizes. When we select a random sample of European-American students comparable to the sample size of Asian-American students, authoritarian parenting also shows no effect for European-American students. Furthermore, analysis of TEPS shows that authoritarian parenting is negatively associated with children's school achievement, while authoritative parenting is positively associated. This result for Taiwanese Chinese students is similar to previous results for European-American students in the US.

  10. Particle size fractionation of paralytic shellfish toxins (PSTs): seasonal distribution and bacterial production in the St Lawrence estuary, Canada.

    PubMed

    Michaud, S; Levasseur, M; Doucette, G; Cantin, G

    2002-10-01

    We determined the seasonal distribution of paralytic shellfish toxins (PSTs) and PST producing bacteria in > 15, 5-15, and 0.22-5 microm size fractions in the St Lawrence. We also measured PSTs in a local population of Mytilus edulis. PST concentrations were determined in each size fraction and in laboratory incubations of sub-samples by high performance liquid chromatography (HPLC), including the rigorous elimination of suspected toxin 'imposter' peaks. Mussel toxin levels were determined by mouse bioassay and HPLC. PSTs were detected in all size fractions during the summer sampling season, with 47% of the water column toxin levels associated with particles smaller than Alexandrium tamarense (< 15 microm). Even in the > 15 microm size fraction, we estimated that as much as 92% of PSTs could be associated with particles other than A. tamarense. Our results stress the importance of taking into account the potential presence of PSTs in size fractions other than that containing the known algal producer when attempting to model shellfish intoxication, especially during years of low cell abundance. Finally, our HPLC results confirmed the presence of bacteria capable of autonomous PST production in the St Lawrence as well as demonstrating their regular presence and apparent diversity in the plankton. Copyright 2002 Elsevier Science Ltd.

  11. Structural properties and gas sensing behavior of sol-gel grown nanostructured zinc oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajyaguru, Bhargav; Gadani, Keval; Kansara, S. B.

    2016-05-06

    In this communication, we report the results of studies on the structural properties and gas sensing behavior of nanostructured ZnO grown using an acetone-precursor-based modified sol-gel technique. The final ZnO product was sintered at different temperatures to vary the crystallite size, while the structural properties were studied using X-ray diffraction (XRD) measurements performed at room temperature. XRD results suggest the single-phase nature of all the samples, and the crystallite size increases from 11.53 to 20.96 nm with increasing sintering temperature. Gas sensing behavior has been studied for acetone gas, which indicates that samples sintered at lower temperatures are more capable of sensing acetone; the related mechanism is discussed in the light of crystallite size, crystal boundary density, defect mechanism and possible chemical reactions between gas traces and various oxygen species.

  12. An estimate of field size distributions for selected sites in the major grain producing countries

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.

    1977-01-01

    The field size distributions for the major grain producing countries of the world were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer-analyzed for its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching either a Poisson or a log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution whose moments are readily interpretable and useful for estimating the total population of fields. The resultant predictors of the field size estimates are discussed.

  13. The impact of hypnotic suggestibility in clinical care settings.

    PubMed

    Montgomery, Guy H; Schnur, Julie B; David, Daniel

    2011-07-01

    Hypnotic suggestibility has been described as a powerful predictor of outcomes associated with hypnotic interventions. However, there have been no systematic approaches to quantifying this effect across the literature. This meta-analysis evaluates the magnitude of the effect of hypnotic suggestibility on hypnotic outcomes in clinical settings. PsycINFO and PubMed were searched from their inception through July 2009. Thirty-four effects from 10 studies and 283 participants are reported. Results revealed a statistically significant overall effect size in the small to medium range (r = .24; 95% Confidence Interval = -0.28 to 0.75), indicating that greater hypnotic suggestibility led to greater effects of hypnosis interventions. Hypnotic suggestibility accounted for 6% of the variance in outcomes. Smaller sample size studies, use of the SHCS, and pediatric samples tended to result in larger effect sizes. The authors question the usefulness of assessing hypnotic suggestibility in clinical contexts.

  14. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
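
    The 'basic' two-stage scheme can be sketched as follows; the herd, the prevalence cut-offs and the tie-breaking rule at the second stage are invented for illustration, whereas the published schemes are tuned within a diagnostic-testing framework.

        import numpy as np

        def basic_sequential(herd, n_full, low, high, rng):
            """Score half the sample; stop early if the prevalence estimate
            is decisive, otherwise score the remaining half and decide."""
            cows = rng.choice(herd, size=n_full, replace=False)
            half = n_full // 2
            p1 = cows[:half].mean()
            if p1 <= low:
                return "pass", half
            if p1 >= high:
                return "fail", half
            verdict = "fail" if cows.mean() >= (low + high) / 2 else "pass"
            return verdict, n_full

        rng = np.random.default_rng(11)
        herd = (rng.uniform(size=200) < 0.25).astype(int)  # 25% lame, 200 cows
        print(basic_sequential(herd, n_full=60, low=0.15, high=0.35, rng=rng))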

  15. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ÊS) and a 95% CI (ÊS_L, ÊS_U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ÊS_U), n_U(ÊS_L)] were obtained on a post hoc sample size reflecting the uncertainty in ÊS. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ÊS, ES = ÊS_L and ES = ÊS_U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ÊSs. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
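
    The sample-size step maps directly onto standard one-sample t-test power routines; a sketch using statsmodels, with illustrative effect sizes chosen so that the three outputs land near the 22 (10-245) reported above:

        import math
        from statsmodels.stats.power import TTestPower

        def n_one_sample_t(es, alpha=0.05, power=0.80):
            """Patients needed for a one-sample t-test to detect effect size es."""
            return math.ceil(TTestPower().solve_power(effect_size=es,
                                                      alpha=alpha, power=power))

        for es in (0.62, 1.00, 0.18):  # point estimate and CI-style bounds
            print(es, n_one_sample_t(es))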

  16. Catch me if you can: Comparing ballast water sampling skids to traditional net sampling

    NASA Astrophysics Data System (ADS)

    Bradie, Johanna; Gianoli, Claudio; Linley, Robert Dallas; Schillak, Lothar; Schneider, Gerd; Stehouwer, Peter; Bailey, Sarah

    2018-03-01

    With the recent ratification of the International Convention for the Control and Management of Ships' Ballast Water and Sediments, 2004, it will soon be necessary to assess ships for compliance with ballast water discharge standards. Sampling skids that allow the efficient collection of ballast water samples in a compact space have been developed for this purpose. We ran 22 trials on board the RV Meteor from June 4-15, 2015 to evaluate the performance of three ballast water sampling devices (traditional plankton net, Triton sampling skid, SGS sampling skid) for three organism size classes: ≥ 50 μm, ≥ 10 μm to < 50 μm, and < 10 μm. Natural sea water was run through the ballast water system and untreated samples were collected using paired sampling devices. Collected samples were analyzed in parallel by multiple analysts using several different analytic methods to quantify organism concentrations. To determine whether there were differences in the number of viable organisms collected across sampling devices, results were standardized and statistically treated to filter out other sources of variability, resulting in an outcome variable representing the mean difference in measurements that can be attributed to sampling devices. These results were tested for significance using pairwise Tukey contrasts. Differences in organism concentrations were found in 50% of comparisons between sampling skids and the plankton net for ≥ 50 μm, and ≥ 10 μm to < 50 μm size classes, with net samples containing either higher or lower densities. There were no differences for < 10 μm organisms. Future work will be required to explicitly examine the potential effects of flow velocity, sampling duration, sampled volume, and organism concentrations on sampling device performance.
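
    The pairwise Tukey contrasts used in the final step can be sketched as follows; the densities below are simulated stand-ins for the standardized outcome variable, since the trial data are not reproduced here.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical standardized viable-organism densities per trial,
# after removal of other variance sources (illustrative numbers only)
density = np.concatenate([
    rng.normal(10.0, 2.0, 22),   # traditional plankton net
    rng.normal(9.0, 2.0, 22),    # Triton sampling skid
    rng.normal(9.5, 2.0, 22),    # SGS sampling skid
])
device = ["net"] * 22 + ["triton"] * 22 + ["sgs"] * 22

# Pairwise Tukey contrasts across the three sampling devices
print(pairwise_tukeyhsd(density, device, alpha=0.05).summary())
```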

  17. The Effect of Size Fraction in Analyses of Benthic Foraminifera Assemblages: A Case Study Comparing Assemblages from the >125 μm and >150 μm Size Fractions

    NASA Astrophysics Data System (ADS)

    Weinkauf, Manuel F. G.; Milker, Yvonne

    2018-05-01

    Benthic Foraminifera assemblages are employed for past environmental reconstructions, as well as for biomonitoring studies in recent environments. Despite their established status for such applications, and existing protocols for sample treatment, not all studies using benthic Foraminifera employ the same methodology. For instance, there is no broad practical consensus whether to use the >125 µm or >150 µm size fraction for benthic foraminiferal assemblage analyses. Here, we use early Pleistocene material from the Pefka E section on the Island of Rhodes (Greece), which has been counted in both size fractions, to investigate whether a 25 µm difference in the counted fraction is already sufficient to have an impact on ecological studies. We analysed the influence of the difference in size fraction on studies of biodiversity as well as multivariate assemblage analyses of the sample material. We found that for both types of studies, the general trends remain the same regardless of the chosen size fraction, but in detail significant differences emerge which are not consistently distributed between samples. Studies which require a high degree of precision can thus not compare results from analyses that used different size fractions, and the inconsistent distribution of differences makes it impossible to develop corrections for this issue. We therefore advocate the consistent use of the >125 µm size fraction for benthic foraminiferal studies in the future.

  18. Particle size analysis on density, surface morphology and specific capacitance of carbon electrode from rubber wood sawdust

    NASA Astrophysics Data System (ADS)

    Taer, E.; Kurniasih, B.; Sari, F. P.; Zulkifli, Taslim, R.; Sugianto, Purnama, A.; Apriwandi, Susanti, Y.

    2018-02-01

    Particle size analysis for supercapacitor carbon electrodes made from rubber wood sawdust (SGKK) has been carried out. The electrode particle size was examined with respect to properties such as density, degree of crystallinity, surface morphology and specific capacitance. Variations in particle size were produced by different treatments in the grinding and sieving process. The sample particle sizes were 53-100 µm ground for 20 h (SA), 38-53 µm for 20 h (SB), and < 38 µm with grinding times of 40 h (SC) and 80 h (SD), respectively. All samples were activated in 0.4 M KOH solution. Carbon electrodes were carbonized at 600 °C in an N2 gas environment, followed by CO2 gas activation at 900 °C for 2 h. The densities for the particle-size variations were 1.034, 0.849, 0.892 and 0.982 g cm-3, respectively. The morphological study showed that inter-particle distances were smallest for the 38-53 µm (SB) particle size. The electrochemical properties of the supercapacitor cells were investigated using electrochemical impedance spectroscopy and constant-current charge-discharge with a Solartron 1280 instrument. The electrochemical test results show that the SB samples, with a particle size of 38-53 µm, produce supercapacitor cells with the best capacitive performance.

  19. Performance of the likelihood ratio difference (G2 Diff) test for detecting unidimensionality in applications of the multidimensional Rasch model.

    PubMed

    Harrell-Williams, Leigh; Wolfe, Edward W

    2014-01-01

    Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the likelihood ratio difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
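
    The test statistic itself is straightforward: twice the difference in maximized log-likelihoods between the multidimensional model and the nested unidimensional model, referred to a chi-square distribution. A minimal sketch (the log-likelihoods and degrees of freedom are hypothetical):

```python
from scipy.stats import chi2

def lr_difference_test(loglik_uni, loglik_multi, df_diff):
    """G^2 difference between a nested unidimensional calibration and
    a multidimensional one: G2 = 2 * (llf_general - llf_restricted),
    referred to a chi-square with df equal to the difference in the
    number of free parameters."""
    g2 = 2.0 * (loglik_multi - loglik_uni)
    p = chi2.sf(g2, df_diff)
    return g2, p

# Hypothetical log-likelihoods from calibrating both models
g2, p = lr_difference_test(-10452.3, -10441.8, df_diff=2)
print(f"G2 = {g2:.1f}, p = {p:.4f}")
```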

  20. Imaging of zymogen granules in fully wet cells: evidence for restricted mechanism of granule growth.

    PubMed

    Hammel, Ilan; Anaby, Debbie

    2007-09-01

    The introduction of wet SEM imaging technology permits electron microscopy of wet samples. Samples are placed in sealed specimen capsules and are insulated from the vacuum in the SEM chamber by an impermeable, electron-transparent membrane. The complete insulation of the sample from the vacuum allows direct imaging of fully hydrated, whole-mount tissue. In the current work, we demonstrate direct inspection of thick pancreatic tissue slices (above 400 µm). When scanning the pancreatic surface, the boundaries of intracellular features are seen directly; thus no unfolding is required to ascertain the actual particle size distribution from the sizes of the sections. This method enabled us to investigate the true granule size distribution and confirm early studies showing improved conformity to a Poisson-like distribution, suggesting that homotypic granule growth results from a mechanism that favors the addition of a single unit granule to mature granules.

  1. Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough?

    PubMed

    Hennink, Monique M; Kaiser, Bonnie N; Marconi, Vincent C

    2017-03-01

    Saturation is a core guiding principle to determine sample sizes in qualitative research, yet little methodological research exists on parameters that influence saturation. Our study compared two approaches to assessing saturation: code saturation and meaning saturation. We examined sample sizes needed to reach saturation in each approach, what saturation meant, and how to assess saturation. Examining 25 in-depth interviews, we found that code saturation was reached at nine interviews, whereby the range of thematic issues was identified. However, 16 to 24 interviews were needed to reach meaning saturation where we developed a richly textured understanding of issues. Thus, code saturation may indicate when researchers have "heard it all," but meaning saturation is needed to "understand it all." We used our results to develop parameters that influence saturation, which may be used to estimate sample sizes for qualitative research proposals or to document in publications the grounds on which saturation was achieved.

  2. Influence of Sample Size of Polymer Materials on Aging Characteristics in the Salt Fog Test

    NASA Astrophysics Data System (ADS)

    Otsubo, Masahisa; Anami, Naoya; Yamashita, Seiji; Honda, Chikahisa; Takenouchi, Osamu; Hashimoto, Yousuke

    Polymer insulators have been used worldwide because of several superior properties compared with porcelain insulators: light weight, high mechanical strength, good hydrophobicity, etc. In this paper, the effect of sample size on aging characteristics in the salt fog test is examined. Leakage current was measured using a 100 MHz AD board or a 100 MHz digital oscilloscope and separated into three components (conductive current, corona discharge current and dry-band arc discharge current) using FFT and a newly proposed current differential method. The cumulative charge of each component was estimated automatically by a personal computer. The results show that when the sample size increased under the same average applied electric field, the peak values of the leakage current and of each component current increased. In particular, the cumulative charge and arc length of the dry-band arc discharge increased remarkably with increasing gap length.
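
    A heavily simplified version of the frequency-domain separation can be sketched as below: energy at the mains fundamental and its low harmonics is taken as the conductive component, and the broadband remainder as discharge activity. The paper's current differential method additionally separates corona from dry-band arcing; that step is not reproduced here, and the five-harmonic cut-off is an assumption.

```python
import numpy as np

def split_leakage(i_leak, fs, f0=60.0):
    """Crude frequency-domain split of a leakage-current record:
    content at the mains fundamental and low harmonics is treated as
    the conductive component, the broadband remainder as discharge
    activity (a simplification of the paper's method)."""
    spec = np.fft.rfft(i_leak)
    freqs = np.fft.rfftfreq(len(i_leak), 1.0 / fs)
    low = freqs <= 5 * f0                 # fundamental + low harmonics
    conductive = np.fft.irfft(np.where(low, spec, 0), n=len(i_leak))
    discharge = i_leak - conductive
    return conductive, discharge
```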

  3. Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses

    PubMed Central

    Okamoto, Kenta; Bielecki, Johan; Maia, Filipe R. N. C.; Mühlig, Kerstin; Seibert, M. Marvin; Hantke, Max F.; Benner, W. Henry; Svenda, Martin; Ekeberg, Tomas; Loh, N. Duane; Pietrini, Alberto; Zani, Alessandro; Rath, Asawari D.; Westphal, Daniel; Kirian, Richard A.; Awel, Salah; Wiedorn, Max O.; van der Schot, Gijs; Carlsson, Gunilla H.; Hasse, Dirk; Sellberg, Jonas A.; Barty, Anton; Andreasson, Jakob; Boutet, Sébastien; Williams, Garth; Koglin, Jason; Hajdu, Janos; Larsson, Daniel S. D.

    2017-01-01

    This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ∼40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider-than-expected size distribution (from ∼35 to ∼300 nm in diameter), likely owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles are critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10¹² photons per µm² per pulse. The full-width at half-maximum of the focus was estimated to be 500 nm (assuming 20% beamline transmission), which is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. The results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers. PMID:28512572

  4. Effect of Study Design on Sample Size in Studies Intended to Evaluate Bioequivalence of Inhaled Short-Acting β-Agonist Formulations.

    PubMed

    Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai; Ahrens, Richard C

    2018-04-01

    Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3-by-1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration-recommended 3-by-1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3-by-1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90-μg test dose and a 720-μg reference dose (42% cost reduction). Combining a 180-μg test dose and a 720-μg reference dose produced an estimated 36% cost reduction. © 2017, The Authors. The Journal of Clinical Pharmacology published by Wiley Periodicals, Inc. on behalf of American College of Clinical Pharmacology.
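
    The core of such a Monte Carlo sample-size estimate is a simulation loop over candidate sample sizes. The sketch below uses a deliberately simplified one-sample t-test model with an assumed standardized effect of 0.5, not the bronchoprovocation dose-response model of the actual study.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)

def simulated_power(n, effect=0.5, sims=2000, alpha=0.05):
    """Monte Carlo power of a one-sample t-test on standardized
    within-subject differences with true effect size `effect`."""
    hits = 0
    for _ in range(sims):
        d = rng.normal(effect, 1.0, size=n)
        if ttest_1samp(d, 0.0).pvalue < alpha:
            hits += 1
    return hits / sims

# Smallest n achieving 80% simulated power
n = 5
while simulated_power(n) < 0.80:
    n += 1
print(n)
```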

  5. Physicochemical Characterization of Capstone Depleted Uranium Aerosols I: Uranium Concentration in Aerosols as a Function of Time and Particle Size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parkhurst, MaryAnn; Cheng, Yung-Sung; Kenoyer, Judson L.

    2009-03-01

    During the Capstone Depleted Uranium (DU) Aerosol Study, aerosols containing depleted uranium were produced inside unventilated armored vehicles (i.e., Abrams tanks and Bradley Fighting Vehicles) by perforation with large-caliber DU penetrators. These aerosols were collected and characterized, and the data were subsequently used to assess human health risks to personnel exposed to DU aerosols. The DU content of each aerosol sample was first quantified by radioanalytical methods, and selected samples, primarily those from the cyclone separator grit chambers, were analyzed radiochemically. Deposition occurred inside the vehicles as particles settled on interior surfaces. Settling rates of uranium from the aerosols were evaluated using filter cassette samples that collected aerosol as total mass over eight sequential time intervals. A moving filter was used to collect aerosol samples over time, particularly within the first minute after the shot. The results demonstrate that the peak uranium concentration in the aerosol occurred in the first 10 s, and the concentration decreased in the Abrams tank shots to about 50% within 1 min and to less than 2% 30 min after perforation. In the Bradley vehicle, the initial (and maximum) uranium concentration was lower than those observed in the Abrams tank and decreased more slowly. Uranium mass concentrations in the aerosols as a function of particle size were evaluated using samples collected in the cyclone samplers, which collected aerosol continuously for 2 h post perforation. The percentages of uranium mass in the cyclone separator stages from the Abrams tank tests ranged from 38% to 72% and, in most cases, varied with particle size, typically with less uranium associated with the smaller particle sizes. Results with the Bradley vehicle ranged from 18% to 29% and were not specifically correlated with particle size.

  6. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiser, I; Lu, Z

    2014-06-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs (N = 20, 40, 60, 80). Observer performance was quantified as the proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For N = 100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC, at N = 20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task.
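
    The sample-size dependence reported here can be illustrated by subsampling image pairs and recomputing the proportion correct (PC). The observer scores below are simulated Gaussian stand-ins, not CHO or template-matching outputs from the actual CT data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical paired observer-model scores for 100 signal-present
# and 100 signal-absent ROIs (higher score = more signal-like)
sp = rng.normal(1.0, 1.0, 100)
sa = rng.normal(0.0, 1.0, 100)

for n in (20, 40, 60, 80, 100):
    pcs = []
    for _ in range(500):
        idx = rng.choice(100, size=n, replace=False)
        pcs.append(np.mean(sp[idx] > sa[idx]))  # 2AFC proportion correct
    print(f"n={n}: PC={np.mean(pcs):.3f}, "
          f"rel. SD={np.std(pcs) / np.mean(pcs):.3f}")
```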

  7. Physicochemical characterization of Capstone depleted uranium aerosols I: uranium concentration in aerosols as a function of time and particle size.

    PubMed

    Parkhurst, Mary Ann; Cheng, Yung Sung; Kenoyer, Judson L; Traub, Richard J

    2009-03-01

    During the Capstone Depleted Uranium (DU) Aerosol Study, aerosols containing DU were produced inside unventilated armored vehicles (i.e., Abrams tanks and Bradley Fighting Vehicles) by perforation with large-caliber DU penetrators. These aerosols were collected and characterized, and the data were subsequently used to assess human health risks to personnel exposed to DU aerosols. The DU content of each aerosol sample was first quantified by radioanalytical methods, and selected samples, primarily those from the cyclone separator grit chambers, were analyzed radiochemically. Deposition occurred inside the vehicles as particles settled on interior surfaces. Settling rates of uranium from the aerosols were evaluated using filter cassette samples that collected aerosol as total mass over eight sequential time intervals. A moving filter was used to collect aerosol samples over time, particularly within the first minute after a shot. The results demonstrate that the peak uranium concentration in the aerosol occurred in the first 10 s after perforation, and the concentration decreased in the Abrams tank shots to about 50% within 1 min and to less than 2% after 30 min. The initial and maximum uranium concentrations were lower in the Bradley vehicle than those observed in the Abrams tank, and the concentration levels decreased more slowly. Uranium mass concentrations in the aerosols as a function of particle size were evaluated using samples collected in a cyclone sampler, which collected aerosol continuously for 2 h after perforation. The percentages of uranium mass in the cyclone separator stages ranged from 38 to 72% for the Abrams tank with conventional armor. In most cases, it varied with particle size, typically with less uranium associated with the smaller particle sizes. Neither the Abrams tank with DU armor nor the Bradley vehicle results were specifically correlated with particle size and can best be represented by their average uranium mass concentrations of 65 and 24%, respectively.

  8. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion is felt to lack sufficient consideration of the true virtues of the delayed-start design and of the implications in terms of required sample size, overall information, or interpretation of the estimate in the context of small populations. To evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs with effects that develop over time. In addition, a reduced time on placebo results in a decreased estimated treatment effect and therefore always increases the required sample size. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts benefit-risk assessment.
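
    The sample-size penalty follows directly from the usual power calculation, in which the required size per arm scales with the inverse square of the standardized effect. A worked sketch under a normal approximation (the effect sizes are illustrative):

```python
from scipy.stats import norm

def n_per_arm(es, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm of a parallel-group
    trial for a standardized effect size `es`:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / es^2."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 / es ** 2

print(n_per_arm(0.50))  # ~63 per arm at the planned effect
print(n_per_arm(0.35))  # ~128: a 30% smaller effect doubles n
```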

  9. Quantitative characterisation of sedimentary grains

    NASA Astrophysics Data System (ADS)

    Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.

    2016-04-01

    Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin-section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to high-definition thin-section microphotographs showing clear contrast between framework grains and matrix. Along with grain size, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed, producing high-quality results that support the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments of the Dingle Basin, County Kerry, Ireland were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented and allows for quantitative distinction between samples extracted from different sedimentary environments.
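
    One common way to build Fourier shape descriptors, sketched below, treats the boundary as a complex contour and normalizes the FFT magnitudes. This is a generic construction under stated assumptions, not necessarily the specific parameter developed in the paper.

```python
import numpy as np

def fourier_descriptors(boundary_xy, n_desc=10):
    """Translation-, scale- and rotation-invariant Fourier descriptors
    of a closed grain boundary given as an (N, 2) array of x, y points."""
    z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]  # complex contour
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0              # drop DC term: translation invariance
    mags = np.abs(coeffs)        # magnitudes: rotation invariance
    mags /= mags[1]              # first-harmonic norm: scale invariance
    return mags[2:2 + n_desc]    # low-order shape harmonics

# Example: descriptors of a slightly irregular synthetic "grain"
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
r = 1.0 + 0.05 * np.sin(5 * theta)   # five-lobed perturbed circle
grain = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
print(fourier_descriptors(grain))    # the 5th harmonic stands out
```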

  10. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali.

    PubMed

    Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco

    2012-10-12

    Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered lot quality assurance sampling (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), the standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three, and greater than 0.50 in one health area out of two under two of the three sampling plans. Small-sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
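
    The bootstrapping analysis of shrinking cluster samples can be sketched as follows; the coverage level, cluster counts and subsampling rule are simplifying assumptions rather than the Mali survey data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 10 x 15 survey: 10 clusters of 15 children each,
# True/False = vaccinated, with true coverage around 85%
clusters = rng.random((10, 15)) < 0.85

def bootstrap_sd(clusters, n_per_cluster, reps=5000):
    """Bootstrap standard deviation of the coverage estimate when
    only `n_per_cluster` children are kept per resampled cluster.
    (Simplification: the same child positions are subsampled in
    every cluster.)"""
    estimates = []
    n_clusters, cluster_size = clusters.shape
    for _ in range(reps):
        picked = clusters[rng.integers(0, n_clusters, n_clusters)]
        cols = rng.choice(cluster_size, n_per_cluster, replace=False)
        estimates.append(picked[:, cols].mean())
    return float(np.std(estimates))

for m in (15, 10, 5, 3):   # from a 10 x 15 down to a 10 x 3 design
    print(m, round(bootstrap_sd(clusters, m), 3))
```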

  11. Optimally estimating the sample mean from the sample size, median, mid-range, and/or mid-quartile range.

    PubMed

    Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun

    2018-06-01

    The era of big data is coming, and evidence-based medicine is attracting increasing attention as a way to improve decision making in medical practice by integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of treatment effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis, but some trials report the median, the minimum and maximum values, or sometimes the first and third quartiles instead. Thus, to pool results in a consistent format, researchers need to transform this information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. in the famous method proposed by Hozo et al. We solve this issue by incorporating the sample size in a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve the existing ones significantly but also share the same virtue of simplicity. The real data application indicates that our proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
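
    For the minimum/median/maximum scenario, the contrast between the classic estimator and a smoothly weighted one can be sketched as below; the specific weight function is shown for illustration only, and readers should consult the article for the optimal form.

```python
def mean_from_min_median_max(a, m, b, n):
    """Estimate the sample mean from the minimum (a), median (m),
    maximum (b) and sample size n. The classic Hozo et al. estimator
    is (a + 2m + b) / 4, independent of n; the smoothly weighted form
    below shrinks the range's contribution as n grows (the weight
    4 / (4 + n**0.75) is used here for illustration)."""
    w = 4.0 / (4.0 + n ** 0.75)
    return w * (a + b) / 2.0 + (1.0 - w) * m

print(mean_from_min_median_max(10, 25, 70, n=16))   # weight ~ 0.33
print(mean_from_min_median_max(10, 25, 70, n=400))  # weight ~ 0.04
```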

  12. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations to the underlying power curves for detecting a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates of the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
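
    The simulation engine behind such a calculator rests on the reverse catalytic model. A minimal sketch, with assumed SCR and seroreversion values and a deliberately crude handling of the change point (the proper piecewise likelihood is omitted):

```python
import numpy as np

rng = np.random.default_rng(11)

def seroprevalence(age, scr, srr=0.01):
    """Reverse catalytic model under a stable seroconversion rate
    (SCR): P(age) = scr/(scr+srr) * (1 - exp(-(scr+srr)*age)),
    where srr is the seroreversion rate."""
    return scr / (scr + srr) * (1.0 - np.exp(-(scr + srr) * age))

# Simulate one cross-sectional survey of n individuals in which the
# SCR dropped from 0.10 to 0.05 ten years before sampling. As a
# rough approximation, individuals aged <= 10 are treated as exposed
# only to the reduced rate.
n = 300
ages = rng.integers(1, 60, size=n)
p = np.where(ages <= 10,
             seroprevalence(ages, 0.05),
             seroprevalence(ages, 0.10))
serostatus = rng.random(n) < p   # simulated antibody data
print(serostatus.mean())
```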

  13. Random Distribution Pattern and Non-adaptivity of Genome Size in a Highly Variable Population of Festuca pallens

    PubMed Central

    Šmarda, Petr; Bureš, Petr; Horová, Lucie

    2007-01-01

    Background and Aims: The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is assumed to be a model for the study of the initial stages of genome size differentiation. Methods: Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2.25 m²), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results: At maximum, a 1.115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fits the extreme-value (Gumbel) distribution with rare occurrences of extremely large genomes (positively skewed), similar to the log-normal distribution observed across the angiosperms as a whole. Even plants from the same plot frequently varied considerably in genome size, and the spatial distribution of genome sizes was generally random and unautocorrelated (P > 0.05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions: These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127-132). PMID:17565968
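
    Fitting and comparing candidate distributions of genome size is routine with scipy; the sketch below uses simulated stand-in measurements, since the study's flow-cytometry values are not reproduced here.

```python
import numpy as np
from scipy.stats import gumbel_r, lognorm

rng = np.random.default_rng(5)

# Stand-in data: 172 relative genome-size measurements (the real
# study's values are not reproduced here)
genome_size = gumbel_r.rvs(loc=1.00, scale=0.02, size=172,
                           random_state=rng)

# Fit candidate distributions and compare by log-likelihood
loc, scale = gumbel_r.fit(genome_size)
ll_gum = gumbel_r.logpdf(genome_size, loc, scale).sum()
shape, loc2, scale2 = lognorm.fit(genome_size)
ll_log = lognorm.logpdf(genome_size, shape, loc2, scale2).sum()
print(f"Gumbel: {ll_gum:.1f}, log-normal: {ll_log:.1f}")
```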

  14. How beta diversity and the underlying causes vary with sampling scales in the Changbai mountain forests.

    PubMed

    Tan, Lingzhao; Fan, Chunyu; Zhang, Chunyu; von Gadow, Klaus; Fan, Xiuhua

    2017-12-01

    This study aims to establish a relationship between sampling scale and tree species beta diversity in temperate forests, and to identify the underlying causes of beta diversity at different sampling scales. The data were obtained from three large observational study areas in the Changbai mountain region in northeastern China. All trees with a dbh ≥ 1 cm were stem-mapped and measured. Beta diversity was calculated for four different grain sizes, and the associated variance was partitioned into components explained by environmental and spatial variables to determine the contributions of environmental filtering and dispersal limitation to beta diversity. The results showed that both beta diversity and its causes were dependent on the sampling scale. Beta diversity decreased with increasing scales. The best-explained variation in beta diversity, up to about 60%, was found in the secondary conifer and broad-leaved mixed forest (CBF) study area at the 40 × 40 m scale. The variation partitioning indicated that environmental filtering had greater effects at larger grain sizes, while dispersal limitation was more important at smaller grain sizes. Moreover, the explanatory ability of environmental effects increased with increasing sampling grain, whereas spatial effects showed no clear trend. The study emphasizes that the underlying causes of beta diversity variation may be quite different within the same region depending on the sampling scale. Therefore, scale effects should be taken into account in future studies on beta diversity, which is critical for identifying the relative importance of spatial and environmental drivers of species composition variation.

  15. The Grain-size Patchiness of Braided Gravel-Bed Streams: Example of the Urumqi River (northeast Tian Shan, China)

    NASA Astrophysics Data System (ADS)

    Guerit, L.; Barrier, L.; Narteau, C.; Métivier, F.; Liu, Y.; Lajeunesse, E.; Gayer, E.; Malverti, L.; Meunier, P.; Ye, B.

    2012-04-01

    In gravel-bed rivers, sediments are sorted into patches of different grain sizes. For single-thread streams, it has long been shown that this local granulometric sorting is closely linked to the channel morpho-sedimentary elements. For braided streams, this relation is still unclear. In such rivers, many observations of vertical sediment sorting have led to the definition of a surface layer and a subsurface layer. Because of this common stratification, methods for sampling gravel-bed rivers have been divided into two families: the surface layer is generally sampled by surface methods and the subsurface layer by volumetric methods. Yet the equivalence between the two kinds of techniques is still a key question. In this study, we characterized the grain-size distribution of the surface layer of the Urumqi River, a shallow braided gravel-bed river in China, by surface-count (Wolman grid-by-number) and volumetric (sieve-by-weight) sampling methods. An analysis of two large samples (212 grains and 3226 kg) shows that these two methods are equivalent for characterizing the river-bed surface layer. Then, we looked at the grain-size distributions of the river-bed morpho-sedimentary elements: (1) chutes at flow constrictions, which pass downstream to (2) anabranches and (3) bars at flow expansions. Using both sampling methods, we measured the diameters of more than 2300 grains and weighed more than 6000 kg of grains larger than 4 mm. Our results show that the three morpho-sedimentary elements correspond to only two kinds of grain-size patches: (1) chutes composed of one coarse-grained top layer lying on finer deposits, and (2) anabranches and bars made up of finer-grained deposits more homogeneous in depth. On the basis of these quantitative observations, together with the concave or convex morphology of the different elements, we propose that chute patches form by erosion and transit with size-selective entrainment, whereas anabranch and bar patches develop and migrate by transit and deposition. These patch features may be typical of shallow braided gravel-bed rivers and should be considered in future work on bedload transport processes and their geomorphologic and stratigraphic results.

  16. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests under a design of one factor with two levels, including the sample size estimation formulas and their realization, based on the formulas and on the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents worked examples, which should serve as a guide for researchers implementing the repetition principle during the research design phase.
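
    For quantitative data under this design, the familiar normal-approximation formula can be sketched as follows (the inputs are illustrative; the article's formulas and the SAS POWER procedure give the exact t-based results):

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for comparing two
    means under a one-factor, two-level design:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (sigma * z / delta) ** 2)

print(n_per_group(delta=5.0, sigma=8.0))   # 54 per group
```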

  17. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is discussed controversially in public. Thus, from a biometrical point of view, an optimal sample size should be aimed at for these projects. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid, or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  18. A Comparison Study of Normal-Incidence Acoustic Impedance Measurements of a Perforate Liner

    NASA Technical Reports Server (NTRS)

    Schultz, Todd; Liu, Fei; Cattafesta, Louis; Sheplak, Mark; Jones, Michael

    2009-01-01

    The eduction of the acoustic impedance for liner configurations is fundamental to the reduction of noise from modern jet engines. Ultimately, this property must be measured accurately for use in analytical and numerical propagation models of aircraft engine noise. Thus any standardized measurement techniques must be validated by providing reliable and consistent results for different facilities and sample sizes. This paper compares normal-incidence acoustic impedance measurements using the two-microphone method of ten nominally identical individual liner samples from two facilities, namely 50.8 mm and 25.4 mm square waveguides at NASA Langley Research Center and the University of Florida, respectively. The liner chosen for this investigation is a simple single-degree-of-freedom perforate liner with resonance and anti-resonance frequencies near 1.1 kHz and 2.2 kHz, respectively. The results show that the ten measurements have the most variation around the anti-resonance frequency, where statistically significant differences exist between the averaged results from the two facilities. However, the sample-to-sample variation is comparable in magnitude to the predicted cross-sectional area-dependent cavity dissipation differences between facilities, providing evidence that the size of the present samples does not significantly influence the results away from anti-resonance.
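
    For reference, the transfer-function relation underlying the two-microphone method (as standardized in ISO 10534-2) can be written compactly; the geometry and transfer-function value in the example below are made up.

```python
import numpy as np

def impedance_two_mic(H12, f, x1, s, c=343.0):
    """Two-microphone transfer-function method (ISO 10534-2): from
    the complex transfer function H12 = p2/p1 between two microphones
    spaced `s` apart, with the farther microphone at distance `x1`
    from the sample face, recover the normal-incidence reflection
    coefficient R and the normalized surface impedance z = Z/(rho*c)."""
    k = 2.0 * np.pi * f / c                       # acoustic wavenumber
    R = ((H12 - np.exp(-1j * k * s)) /
         (np.exp(1j * k * s) - H12)) * np.exp(2j * k * x1)
    z = (1.0 + R) / (1.0 - R)
    return R, z

# Example: a made-up transfer function at 1.1 kHz (near resonance)
R, z = impedance_two_mic(H12=0.6 - 0.3j, f=1100.0, x1=0.04, s=0.02)
print(abs(R), z)
```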

  19. Electrodeposition of Fe₃O₄ layer from solution of Fe₂(SO₄)₃ with addition of ethylene glycol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlan, Dahyunir, E-mail: dahyunir@yahoo.com; Asrar, Allan

    2016-03-11

    The electrodeposition of an Fe₃O₄ layer from Fe₂(SO₄)₃ solution with the addition of ethylene glycol on an Indium Tin Oxide (ITO) substrate has been performed. The electrodeposition was carried out at a voltage of 5 V for 120 seconds, with and without the addition of 2 wt% ethylene glycol. Significant effects of temperature on the resulting samples are observed when they are heated at 400 °C. Structural characterization using X-ray diffraction (XRD) shows that all samples produce a layer of Fe₃O₄ with particle size less than 50 nanometers. The addition of ethylene glycol and the heating of the sample cause a shrinkage in particle size. Scanning electron microscopy (SEM) characterization shows that the Fe₃O₄ layer resulting from the electrodeposition of Fe₂(SO₄)₃ without ethylene glycol, whether heated or not, is uneven and shows buildup. The layer produced with the addition of ethylene glycol but without heating consists of spherical particles. In contrast, when that layer is heated, the spherical particles transform into irregularly shaped particles of smaller size.

  20. Methane Leaks from Natural Gas Systems Follow Extreme Distributions.

    PubMed

    Brandt, Adam R; Heath, Garvin A; Cooley, Daniel

    2016-11-15

    Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH₄) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ∼15 000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH₄ emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
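
    The headline statistic (the top 5% of leaks carrying over half the volume) is easy to reproduce on synthetic data; the distributions below are illustrative choices, not fits to the 18 study data sets.

```python
import numpy as np

rng = np.random.default_rng(9)

def top_share(leaks, frac=0.05):
    """Fraction of total volume contributed by the largest `frac`
    of leaks."""
    x = np.sort(np.asarray(leaks))[::-1]
    k = max(1, int(len(x) * frac))
    return x[:k].sum() / x.sum()

# Heavy-tailed (Pareto) vs log-normal samples of the same size
pareto_sample = rng.pareto(1.1, 15_000) + 1.0
lognormal_sample = rng.lognormal(mean=0.0, sigma=1.0, size=15_000)
print(top_share(pareto_sample))     # typically well above 0.5
print(top_share(lognormal_sample))  # markedly smaller
```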
