Sample records for determining sample size

  1. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...

  2. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    ERIC Educational Resources Information Center

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  3. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  4. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
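
    Yuen's test itself is compact enough to sketch. Below is a minimal Python implementation of the two-sample trimmed mean test (Yuen, 1974) that the study builds on, assuming 20% symmetric trimming; the function and variable names are illustrative, not taken from the paper.

      import numpy as np
      from scipy import stats

      def yuen_test(x, y, trim=0.2):
          """Two-sample Yuen trimmed-mean test (Yuen, 1974) -- minimal sketch."""
          x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
          d, tmeans, hs = [], [], []
          for v in (x, y):
              n = len(v)
              g = int(np.floor(trim * n))      # observations trimmed from each tail
              h = n - 2 * g                    # effective sample size
              tmeans.append(v[g:n - g].mean()) # trimmed mean
              w = v.copy()                     # Winsorize: clamp the tails inward
              w[:g], w[n - g:] = v[g], v[n - g - 1]
              swin2 = w.var(ddof=1)            # Winsorized variance
              d.append((n - 1) * swin2 / (h * (h - 1)))
              hs.append(h)
          t = (tmeans[0] - tmeans[1]) / np.sqrt(d[0] + d[1])
          df = (d[0] + d[1]) ** 2 / (d[0] ** 2 / (hs[0] - 1) + d[1] ** 2 / (hs[1] - 1))
          p = 2 * stats.t.sf(abs(t), df)       # two-sided p-value
          return t, df, p

      rng = np.random.default_rng(1)
      print(yuen_test(rng.standard_normal(30), rng.standard_normal(40) + 0.8))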

  5. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, "What sample size is needed for my study?" While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…

  6. Neuromuscular dose-response studies: determining sample size.

    PubMed

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
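
    The sample sizes quoted here can be reproduced with standard power software by taking the standardized effect size to be the allowable error divided by the COV. A sketch in Python using statsmodels (my reconstruction of the calculation, not the study's code):

      import math
      from statsmodels.stats.power import TTestPower

      cov = 0.25                        # coefficient of variation of the ED50 (from the abstract)
      for allowable_error in (0.15, 0.12):
          d = allowable_error / cov     # standardized effect size for a one-sample t-test
          n = TTestPower().solve_power(effect_size=d, alpha=0.05, power=0.80,
                                       alternative='two-sided')
          print(f"allowable error ±{allowable_error:.0%}: n = {math.ceil(n)}")
      # -> n = 24 and n = 37, the figures reported above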

  7. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles have long been known, historically, sample size determination has been difficult because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many programs can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample sizes and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
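
    The determinants listed here (α, power, variance, effect size) combine into the classical two-group formula n per group = 2σ²(z₁₋α/₂ + z₁₋β)²/Δ². A minimal sketch of that textbook calculation (not code from the module itself):

      import math
      from scipy.stats import norm

      def n_per_group(delta, sd, alpha=0.05, power=0.80):
          """Normal-approximation sample size for comparing two group means:
          n = 2*sd^2*(z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
          z_alpha = norm.ppf(1 - alpha / 2)   # two-sided Type 1 error
          z_beta = norm.ppf(power)            # power = 1 - Type 2 error
          return math.ceil(2 * (sd * (z_alpha + z_beta) / delta) ** 2)

      # detect a difference of 0.5 SD with 80% power at alpha = 5%
      print(n_per_group(delta=0.5, sd=1.0))   # -> 63 per group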

  8. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  9. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R2) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R2 for multiple logistic regression, (ii) the availability of interim or group-sequential designs, and (iii) a much smaller required sample size.
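
    For context, the existing approach the authors improve on typically sizes the simple logistic model first and then inflates it by a variance inflation factor built from the R2 mentioned above (Hsieh et al., 1998). A sketch of that inflation step, with illustrative numbers:

      import math

      def inflate_for_covariates(n_simple, r_squared):
          """Variance-inflation adjustment for multiple logistic regression
          (Hsieh et al., 1998): n_multiple = n_simple / (1 - R^2), where R^2
          is the coefficient of determination of the covariate of interest
          regressed on the other covariates."""
          return math.ceil(n_simple / (1 - r_squared))

      print(inflate_for_covariates(n_simple=208, r_squared=0.3))  # -> 298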

  10. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the distribution of the product method is recommended for use in practice because of its lower computational load compared with the bootstrapping method. An R package has been developed for sample size determination using the distribution of the product method in longitudinal mediation study designs.
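
    Of the three tests compared, Sobel's method has the simplest closed form: with path coefficients a and b and standard errors sa and sb, z = ab/√(b²sa² + a²sb²). A generic sketch (not the authors' multilevel code):

      import math
      from scipy.stats import norm

      def sobel_test(a, sa, b, sb):
          """Sobel (1982) test of the mediated effect a*b."""
          z = a * b / math.sqrt(b**2 * sa**2 + a**2 * sb**2)
          p = 2 * norm.sf(abs(z))   # two-sided p-value
          return z, p

      # illustrative path estimates and standard errors
      print(sobel_test(a=0.40, sa=0.10, b=0.35, sb=0.12))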

  11. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
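
    The claim is easy to verify numerically: the approximate margin of error of a poll depends only on n; the population size never enters. A quick sketch:

      import math
      from scipy.stats import norm

      def margin_of_error(n, p=0.5, conf=0.95):
          """Normal-approximation margin of error for a sample proportion;
          note that the population size does not appear in the formula."""
          z = norm.ppf(1 - (1 - conf) / 2)
          return z * math.sqrt(p * (1 - p) / n)

      # an n = 1000 poll is accurate to about +/-3.1 points whether the
      # "pot" (population) is a small town or a whole country
      print(f"{margin_of_error(1000):.3f}")   # -> 0.031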

  12. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention's benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is to claim benefit when superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behavior of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799

  13. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference over parasite rates and not over these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes are increased relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
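
    SP and SCR are linked through the reverse catalytic model standard in this literature, SP(a) = λ/(λ+ρ)·(1 − e^(−(λ+ρ)a)), with SCR λ, SRR ρ and age a. A sketch of the core step of the first calculator, inverting an SP estimate at a given age into an SCR when ρ is known; this illustrates the idea only, and all numbers below are invented:

      import math
      from scipy.optimize import brentq

      def sp_from_scr(scr, srr, age):
          """Equilibrium seroprevalence at a given age under the reverse
          catalytic model: SP(a) = SCR/(SCR+SRR) * (1 - exp(-(SCR+SRR)*a))."""
          k = scr + srr
          return scr / k * (1.0 - math.exp(-k * age))

      def scr_from_sp(sp, srr, age):
          """Numerically invert the model to recover SCR from an SP estimate."""
          return brentq(lambda lam: sp_from_scr(lam, srr, age) - sp, 1e-9, 5.0)

      # map the endpoints of a 95% CI for SP into a CI for SCR
      srr, age = 0.02, 20.0
      for sp in (0.35, 0.45):
          print(f"SP = {sp:.2f}  ->  SCR = {scr_from_sp(sp, srr, age):.4f}")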

  15. Determining the Population Size of Pond Phytoplankton.

    ERIC Educational Resources Information Center

    Hummer, Paul J.

    1980-01-01

    Discusses methods for determining the population size of pond phytoplankton, including water sampling techniques, laboratory analysis of samples, and additional studies worthy of investigation in class or as individual projects. (CS)

  16. Sample size determination for equivalence assessment with multiple endpoints.

    PubMed

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required for each endpoint. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of all power functions for individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.

  17. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    ERIC Educational Resources Information Center

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  18. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
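
    The upper confidence limits recommended here follow from the chi-square distribution of (n−1)s²/σ². A sketch of the standard computation, using the paper's 60%, 70% and 80% levels and invented pilot data:

      import math
      from scipy.stats import chi2

      def sd_upper_confidence_limit(s, n, level):
          """One-sided 100*level% upper confidence limit for a population SD,
          based on (n-1)*s^2/sigma^2 ~ chi-square(n-1)."""
          df = n - 1
          return s * math.sqrt(df / chi2.ppf(1 - level, df))

      s, n = 40.0, 20   # illustrative pilot-sample SD and size
      for level in (0.60, 0.70, 0.80):
          print(f"{level:.0%} UCL of SD = {sd_upper_confidence_limit(s, n, level):.1f}")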

  19. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  20. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    PubMed

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre-randomization and multiple post-randomization assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra-conservative' and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
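
    One way to reproduce the quoted minimum savings, assuming ANCOVA on the mean of k follow-ups with the baseline as covariate under compound symmetry with correlation ρ: the residual variance factor relative to the two-sample t-test is (1+(k−1)ρ)/k − ρ², and maximizing it over ρ gives the worst case. A sketch of that check (my reading of the result, not the authors' code):

      import numpy as np

      def variance_factor(rho, k):
          """Residual variance factor for ANCOVA on the mean of k follow-ups
          with the baseline as covariate, under compound symmetry."""
          return (1 + (k - 1) * rho) / k - rho**2

      for k in (2, 3, 4):
          rho = np.linspace(0, 1, 100001)
          worst = variance_factor(rho, k).max()   # worst case over the correlation
          print(f"k = {k}: sample size reduced by at least {1 - worst:.0%}")
      # -> 44%, 56%, 61%, matching the abstract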

  1. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  2. Determining sample size for tree utilization surveys

    Treesearch

    Stanley J. Zarnoch; James W. Bentley; Tony G. Johnson

    2004-01-01

    The U.S. Department of Agriculture Forest Service has conducted many studies to determine what proportion of the timber harvested in the South is actually utilized. This paper describes the statistical methods used to determine required sample sizes for estimating utilization ratios for a required level of precision. The data used are those for 515 hardwood and 1,557...

  3. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  4. How Large Should a Statistical Sample Be?

    ERIC Educational Resources Information Center

    Menil, Violeta C.; Ye, Ruili

    2012-01-01

    This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating population proportion. Tables on sample sizes were generated using a C++ program, which depends on population size, degree of precision or error level, and confidence…
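
    Tables of this kind follow from the standard proportion formula with a finite population correction; the original tool was a C++ program, but the computation fits in a few lines of any language. A sketch using textbook formulas and illustrative parameters:

      import math
      from scipy.stats import norm

      def proportion_sample_size(population, error, conf=0.95, p=0.5):
          """Cochran's sample size for estimating a proportion, with the
          finite population correction applied."""
          z = norm.ppf(1 - (1 - conf) / 2)
          n0 = z**2 * p * (1 - p) / error**2        # infinite-population size
          return math.ceil(n0 / (1 + (n0 - 1) / population))

      for pop in (500, 5000, 50000):
          print(pop, proportion_sample_size(pop, error=0.05))
      # -> 218, 357, 382: the correction matters mainly for small populations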

  5. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Treesearch

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...

  6. How large a training set is needed to develop a classifier for microarray data?

    PubMed

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.

  7. Generalized Sample Size Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    PubMed

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed for investigating contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.

  8. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining the sample sizes that are sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, and hence sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired Type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
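
    Such tables are typically built on Buderer's (1996) formula: size the diseased (or disease-free) subgroup like a proportion, then divide by prevalence to get the total number to screen. A sketch with hypothetical inputs:

      import math
      from scipy.stats import norm

      def n_for_sensitivity(sens, precision, prevalence, conf=0.95):
          """Buderer (1996): subjects needed so the sensitivity estimate has
          the stated precision (half-width of its confidence interval)."""
          z = norm.ppf(1 - (1 - conf) / 2)
          n_cases = z**2 * sens * (1 - sens) / precision**2   # diseased subjects
          return math.ceil(n_cases / prevalence)              # total to screen

      # expect 90% sensitivity, want +/-5% precision, 20% disease prevalence
      print(n_for_sensitivity(0.90, 0.05, 0.20))   # -> 692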

  9. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.

  10. 40 CFR 1042.310 - Engine selection for Category 1 and Category 2 engines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Category 2 engines. (a) Determine minimum sample sizes as follows: (1) For Category 1 engines, the minimum sample size is one engine or one percent of the projected U.S.-directed production volume for all your Category 1 engine families, whichever is greater. (2) For Category 2 engines, the minimum sample size is...

  11. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

    ERIC Educational Resources Information Center

    Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.

    2013-01-01

    Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…

  12. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
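
    Under a normal model with known σ, this probability has the closed form 2Φ(k√n) − 1; the authors' method also covers the small-sample, unknown-σ case, which this simple sketch does not:

      import math
      from scipy.stats import norm

      def prob_within(k, n):
          """P(|sample mean - mu| <= k*sigma) for a normal sample of size n
          with known sigma: 2*Phi(k*sqrt(n)) - 1."""
          return 2 * norm.cdf(k * math.sqrt(n)) - 1

      for n in (3, 5, 10):
          print(f"n = {n}: P(within 0.5 SD of true mean) = {prob_within(0.5, n):.2f}")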

  13. Sample size for estimating mean and coefficient of variation in species of crotalarias.

    PubMed

    Toebe, Marcos; Machado, Letícia N; Tartaglia, Francieli L; Carvalho, Juliana O DE; Bandeira, Cirineu T; Cargnelutti Filho, Alberto

    2018-04-16

    The objective of this study was to determine the sample size necessary to estimate the mean and coefficient of variation in four species of crotalarias (C. juncea, C. spectabilis, C. breviflora and C. ochroleuca). An experiment was carried out for each species during the 2014/15 season. At harvest, 1,000 pods of each species were randomly collected. In each pod the following were measured: mass of pod with and without seeds, length, width and height of pods, number and mass of seeds per pod, and mass of a hundred seeds. Measures of central tendency, variability and distribution were calculated, and normality was verified. The sample size necessary to estimate the mean and coefficient of variation with amplitudes of the 95% confidence interval (ACI95%) of 2%, 4%, ..., 20% was determined by resampling with replacement. The sample size varies among species and characters, with a larger sample size needed to estimate the mean than to estimate the coefficient of variation.
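
    The resampling scheme described can be sketched in a few lines: for each candidate n, bootstrap the mean, take the 95% percentile interval, and keep the smallest n whose interval amplitude (as a percentage of the mean) meets the target. All data and names below are invented for illustration:

      import numpy as np

      def n_for_ci_amplitude(data, target_pct, n_boot=2000, seed=0):
          """Smallest resample size whose bootstrap 95% CI for the mean has
          an amplitude (upper - lower, as % of the mean) within the target."""
          rng = np.random.default_rng(seed)
          data = np.asarray(data, dtype=float)
          for n in range(5, len(data) + 1):
              means = rng.choice(data, size=(n_boot, n), replace=True).mean(axis=1)
              lo, hi = np.percentile(means, [2.5, 97.5])
              if (hi - lo) / data.mean() * 100 <= target_pct:
                  return n
          return None

      pods = np.random.default_rng(42).gamma(shape=8, scale=0.5, size=1000)
      print(n_for_ci_amplitude(pods, target_pct=10))   # CI95% amplitude <= 10%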

  14. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.

  15. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is discussed controversially in public. Thus, from a biometrical point of view, an optimal sample size should be sought for these projects. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  16. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    De Salvo, L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
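
    The hypergeometric computation itself no longer requires an iterative workaround for the factorials; scipy evaluates the distribution directly. A sketch reproducing the worked example above (lot of 400, 1% nonconforming, 99% confidence, acceptance number zero):

      from scipy.stats import hypergeom

      def min_sample_size(lot_size, n_defective, confidence):
          """Smallest n such that observing zero defectives in the sample
          gives the stated confidence about the lot:
          P(0 defectives in sample) <= 1 - confidence."""
          for n in range(1, lot_size + 1):
              if hypergeom.pmf(0, lot_size, n_defective, n) <= 1 - confidence:
                  return n
          return lot_size

      print(min_sample_size(lot_size=400, n_defective=4, confidence=0.99))  # -> 273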

  17. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    PubMed

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.

  18. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  19. An alternative method for determining particle-size distribution of forest road aggregate and soil with large-sized particles

    Treesearch

    Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese

    2014-01-01

    Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...

  20. Approximate Sample Size Formulas for Testing Group Mean Differences when Variances Are Unequal in One-Way ANOVA

    ERIC Educational Resources Information Center

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2008-01-01

    This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…

  1. An improved methodology of asymmetric flow field flow fractionation hyphenated with inductively coupled mass spectrometry for the determination of size distribution of gold nanoparticles in dietary supplements.

    PubMed

    Mudalige, Thilak K; Qu, Haiou; Linder, Sean W

    2015-11-13

    Engineered nanoparticles are available in large numbers of commercial products claiming various health benefits. Nanoparticle absorption, distribution, metabolism, excretion, and toxicity in a biological system are dependent on particle size, thus the determination of size and size distribution is essential for full characterization. Number-based average size and size distribution are major parameters for full characterization of a nanoparticle. In the case of polydispersed samples, large numbers of particles are needed to obtain accurate size distribution data. Herein, we report a rapid methodology, demonstrating improved nanoparticle recovery and excellent size resolution, for the characterization of gold nanoparticles in dietary supplements using asymmetric flow field flow fractionation coupled with visible absorption spectrometry and inductively coupled plasma mass spectrometry. A linear relationship between gold nanoparticle size and retention time was observed, and used for characterization of unknown samples. The particle size results from unknown samples were compared to results from traditional size analysis by transmission electron microscopy, and found to have less than a 5% deviation in size for unknown products over the size range from 7 to 30 nm. Published by Elsevier B.V.

  2. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    ERIC Educational Resources Information Center

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…

  3. Guidelines for sampling aboveground biomass and carbon in mature central hardwood forests

    Treesearch

    Martin A. Spetich; Stephen R. Shifley

    2017-01-01

    As impacts of climate change expand, determining accurate measures of forest biomass and associated carbon storage in forests is critical. We present sampling guidance for 12 combinations of percent error, plot size, and alpha levels by disturbance regime to help determine the optimal size of plots to estimate aboveground biomass and carbon in an old-growth Central...

  4. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    USGS Publications Warehouse

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≈96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.

  5. Determination of Minimum Training Sample Size for Microarray-Based Cancer Outcome Prediction–An Empirical Assessment

    PubMed Central

    Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu

    2013-01-01

    The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers, and these cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into routine clinical applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920

  6. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be considered sufficient under the assumption of no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  7. 75 FR 48815 - Medicaid Program and Children's Health Insurance Program (CHIP); Revisions to the Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ... size may be reduced by the finite population correction factor. The finite population correction is a statistical formula utilized to determine sample size where the population is considered finite rather than... program may notify us and the annual sample size will be reduced by the finite population correction...

  8. Sampling strategies for radio-tracking coyotes

    USGS Publications Warehouse

    Smith, G.J.; Cary, J.R.; Rongstad, O.J.

    1981-01-01

    Ten coyotes radio-tracked for 24 h periods were most active at night and moved little during daylight hours. Home-range size determined from radio-locations of 3 adult coyotes increased with the number of locations until an asymptote was reached at about 35-40 independent day locations or 3-6 nights of hourly radio-locations. Activity of the coyote did not affect the asymptotic nature of the home-range calculations, but home-range sizes determined from more than 3 nights of hourly locations were considerably larger than home-range sizes determined from daylight locations. Coyote home-range sizes were calculated from daylight locations, full-night tracking periods, and half-night tracking periods. Full- and half-night sampling strategies involved obtaining hourly radio-locations during 12 and 6 h periods, respectively. The half-night sampling strategy was the best compromise for our needs, as it adequately indexed the home-range size, reduced time and energy spent, and standardized the area calculation without requiring the researcher to become completely nocturnal. Sight tracking also provided information about coyote activity and sociability.

  9. The feasibility of using explicit method for linear correction of the particle size variation using NIR Spectroscopy combined with PLS2 regression method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Suhandy, D.

    2018-03-01

    NIR spectra obtained from a spectral data acquisition system contain both chemical information about the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in the physical properties of samples. One common approach is to include the physical variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two types of coffee (civet and non-civet) and two particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using an NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted, and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, the particle size is added directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing both the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that with the explicit method the quality of the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R²) = 0.99 and a root mean square error of cross-validation (RMSECV) = 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was good and able to predict the type of coffee at two different particle sizes with relatively high R² pred values. The prediction also resulted in low bias and RMSEP values.
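
    The explicit approach described above can be sketched with scikit-learn's PLSRegression, which accepts a multi-column Y block (PLS2); the simulated spectra, component count and random labels below are placeholders, not the authors' data:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 220, 125
    X = rng.normal(size=(n_samples, n_wavelengths))   # simulated NIR spectra
    coffee_type = rng.integers(0, 2, n_samples)       # 0 = non-civet, 1 = civet
    particle_um = rng.choice([212, 500], n_samples)   # particle size (µm)
    Y = np.column_stack([coffee_type, particle_um])   # PLS2: two responses

    pls2 = PLSRegression(n_components=10).fit(X, Y)
    pred_type, pred_size = pls2.predict(X).T          # joint predictions
    print(pred_type[:5].round(2), pred_size[:5].round(0))
    ```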

  10. Considerations for throughfall chemistry sample-size determination

    Treesearch

    Pamela J. Edwards; Paul Mohai; Howard G. Halverson; David R. DeWalle

    1989-01-01

    Both the number of trees sampled per species and the number of sampling points under each tree are important throughfall sampling considerations. Chemical loadings obtained from an urban throughfall study were used to evaluate the relative importance of both of these sampling factors in tests for determining species' differences. Power curves for detecting...

  11. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    PubMed

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
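
    The geometry described above can be illustrated with a small one-spike simulation: holding the sample size and spike fixed while the dimension grows, the angle between the leading sample eigenvector and its population counterpart widens. The parameter values below are arbitrary:

    ```python
    import numpy as np

    def leading_angle(d, n, spike, seed=0):
        """Angle (degrees) between the leading sample eigenvector and e1."""
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((n, d))                      # isotropic noise
        X[:, 0] += np.sqrt(spike) * rng.standard_normal(n)   # add a single spike
        _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
        return np.degrees(np.arccos(min(1.0, abs(Vt[0, 0]))))

    for d in (50, 500, 5000):   # dimension grows, n and spike fixed
        print(d, round(leading_angle(d, n=50, spike=20.0), 1))
    ```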

  12. Determination of sample size for higher volatile data using new framework of Box-Jenkins model with GARCH: A case study on gold price

    NASA Astrophysics Data System (ADS)

    Roslindar Yaziz, Siti; Zakaria, Roslinazairimah; Hura Ahmad, Maizah

    2017-09-01

    The Box-Jenkins-GARCH model has been shown to be a promising tool for forecasting highly volatile time series. In this study, a framework for determining the optimal sample size using the Box-Jenkins model with GARCH is proposed for practical application in analysing and forecasting highly volatile data. The proposed framework is applied to the daily world gold price series from 1971 to 2013. The data are divided into 12 different sample sizes (from 30 to 10200 observations). Each sample is tested using different combinations of the hybrid Box-Jenkins-GARCH model. Our study shows that the optimal sample size for forecasting the gold price using the framework of the hybrid model is the 5-year sample of 1250 observations. Hence, the empirical results of the model selection criteria and 1-step-ahead forecasting evaluations suggest that the latest 12.25% (5 years) of the 10200 observations is sufficient for the Box-Jenkins-GARCH model, with forecasting performance similar to that obtained using the full 41 years of data.
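
    A hedged sketch of the windowing idea, using the statsmodels and arch packages (the toy series, ARIMA order and GARCH(1,1) specification are placeholders, not the study's fitted models): fit the hybrid model on several trailing sample sizes and compare an information criterion.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from arch import arch_model

    rng = np.random.default_rng(0)
    returns = rng.normal(0, 1.0, 10200)   # toy stand-in for gold returns (%)

    for window in (250, 1250, 5000, len(returns)):
        sub = returns[-window:]                      # trailing sample
        arima = ARIMA(sub, order=(1, 0, 1)).fit()    # Box-Jenkins stage
        garch = arch_model(arima.resid, vol="GARCH", p=1, q=1).fit(disp="off")
        print(window, round(garch.aic, 1))           # compare criteria
    ```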

  13. An anthropometric analysis of Korean male helicopter pilots for helicopter cockpit design.

    PubMed

    Lee, Wonsup; Jung, Kihyo; Jeong, Jeongrim; Park, Jangwoon; Cho, Jayoung; Kim, Heeeun; Park, Seikwon; You, Heecheon

    2013-01-01

    This study measured 21 anthropometric dimensions (ADs) of 94 Korean male helicopter pilots in their 20s to 40s and compared them with corresponding measurements of Korean male civilians and US Army male personnel. The ADs and the sample size of the anthropometric survey were determined by a four-step process: (1) selection of ADs related to helicopter cockpit design, (2) evaluation of the importance of each AD, (3) calculation of required sample sizes for selected precision levels and (4) determination of an appropriate sample size by considering both the AD importance evaluation results and the sample size requirements. The anthropometric comparison reveals that the Korean helicopter pilots are larger (ratio of means = 1.01-1.08) and less dispersed (ratio of standard deviations = 0.71-0.93) than the Korean male civilians, and that relative to the US Army personnel they are shorter in stature (0.99), have shorter upper limbs (0.89-0.96) and lower limbs (0.93-0.97), but are taller in sitting height, sitting eye height and acromial height (1.01-1.03), and less dispersed (0.68-0.97). The sample size determination process and the anthropometric comparison results presented in this study are useful for designing an anthropometric survey and a helicopter cockpit layout, respectively.
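
    Step (3) of such a survey design is commonly based on a precision formula for estimating a mean; a minimal sketch of that textbook formula (not necessarily the exact procedure used in the paper), with the CV and precision values invented:

    ```python
    import math
    from scipy.stats import norm

    def n_for_relative_precision(cv: float, e: float, alpha: float = 0.05) -> int:
        """n = (z * CV / e)^2 for estimating a mean within relative error e."""
        z = norm.ppf(1 - alpha / 2)
        return math.ceil((z * cv / e) ** 2)

    # e.g. a dimension with CV ~ 4%, target precision ±1% of the mean
    print(n_for_relative_precision(cv=0.04, e=0.01))  # -> 62
    ```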

  14. Laboratory theory and methods for sediment analysis

    USGS Publications Warehouse

    Guy, Harold P.

    1969-01-01

    The diverse character of fluvial sediments makes the choice of laboratory analysis somewhat arbitrary and the processing of sediment samples difficult. This report presents some theories and methods used by the Water Resources Division for the analysis of fluvial sediments to determine the concentration of suspended-sediment samples and the particle-size distribution of both suspended-sediment and bed-material samples. Other analyses related to these determinations may include particle shape, mineral content, and specific gravity, the organic matter and dissolved solids of samples, and the specific weight of soils. The merits and techniques of both the evaporation and filtration methods for concentration analysis are discussed. Methods used for particle-size analysis of suspended-sediment samples may include the sieve pipet, the VA tube-pipet, or the BW tube-VA tube, depending on the equipment available, the concentration and approximate size of sediment in the sample, and the settling medium used. The choice of method for most bed-material samples is usually limited to procedures suitable for sand or to some type of visual analysis for large sizes. Several tested forms are presented to help ensure a well-ordered system in the laboratory for handling the samples, determining the kind of analysis required for each, conducting the required processes, and assisting in the required computations. Use of the manual should further 'standardize' methods of fluvial sediment analysis among the many laboratories and thereby help to achieve uniformity and precision of the data.

  15. Sample size considerations when groups are the appropriate unit of analyses

    PubMed Central

    Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith

    2007-01-01

    This paper discusses issues to be considered by nurse researchers when groups should be used as the unit of randomization. Advantages and disadvantages are presented, along with the statistical calculations needed to determine the effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as the unit of randomization, it is advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219
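
    The core statistical calculation here is the usual design-effect inflation of an individually randomized sample size; a minimal sketch, with the group size and ICC values chosen purely for illustration:

    ```python
    import math

    def cluster_adjusted_n(n_individual: int, m: float, icc: float) -> int:
        """Inflate an individually randomized n by DE = 1 + (m - 1) * ICC."""
        return math.ceil(n_individual * (1 + (m - 1) * icc))

    # e.g. 200 participants under individual randomization,
    # groups of 10 with ICC = 0.05 -> 290 participants
    print(cluster_adjusted_n(200, m=10, icc=0.05))
    ```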

  16. Visual accumulation tube for size analysis of sands

    USGS Publications Warehouse

    Colby, B.C.; Christensen, R.P.

    1956-01-01

    The visual-accumulation-tube method was developed primarily for making size analyses of the sand fractions of suspended-sediment and bed-material samples. Because the fundamental property governing the motion of a sediment particle in a fluid is believed to be its fall velocity, the analysis is designed to determine the fall-velocity-frequency distribution of the individual particles of the sample. The analysis is based on a stratified sedimentation system in which the sample is introduced at the top of a transparent settling tube containing distilled water. The procedure involves the direct visual tracing of the height of sediment accumulation in a contracted section at the bottom of the tube. A pen records the height on a moving chart. The method is simple and fast, provides a continuous and permanent record, gives highly reproducible results, and accurately determines the fall-velocity characteristics of the sample. The apparatus, procedure, results, and accuracy of the visual-accumulation-tube method for determining the sedimentation-size distribution of sands are presented in this paper.

  17. 10 CFR Appendix B to Subpart F of... - Sampling Plan For Enforcement Testing

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... sample as follows: ER18MR98.010 where (x 1) is the measured energy efficiency, energy or water (in the...-tailed probability level and a sample size of n 1. Step 6(a). For an Energy Efficiency Standard, compare... an Energy Efficiency Standard, determine the second sample size (n 2) as follows: ER18MR98.015 where...

  18. Using sieving and pretreatment to separate plastics during end-of-life vehicle recycling.

    PubMed

    Stagner, Jacqueline A; Sagan, Barsha; Tam, Edwin Kl

    2013-09-01

    Plastics continue to be a challenge for materials recovery at the end of life for vehicles. However, it may be possible to improve the recovery of plastics by exploiting material characteristics, such as shape, or by altering their behavior, such as through temperature changes, in relation to recovery processes and handling. Samples of a 2009 Dodge Challenger front fascia were shredded in a laboratory-scale hammer mill shredder. A 2 × 2 factorial design study was performed to determine the effect of sample shape (flat versus curved) and sample temperature (room temperature versus cryogenic temperature) on the size of the particles exiting the shredder. It was determined that sample shape does not affect particle size; however, sample temperature does. At cryogenic temperatures, the distribution of particle sizes is much narrower than at room temperature. Having a more uniform particle size could make the recovery of plastic particles such as these more efficient during the recycling of end-of-life vehicles. Samples of Chrysler minivan headlights were also shredded at room temperature and at cryogenic temperatures. The particle sizes of the two different plastics in the headlights are statistically different both at room temperature and at cryogenic temperature, and the particles are narrowly distributed. The research suggests that incremental changes in end-of-life vehicle processing could be effective in aiding materials recovery.

  19. Comparison of particle sizes between ²³⁸PuO₂ before aqueous processing, after aqueous processing, and after ball milling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulford, Roberta Nancy

    Particle sizes determined for a single lot of incoming Russian fuel and for a lot of fuel after aqueous processing are compared with particle sizes measured on fuel after ball-milling. The single samples of each type are believed to have particle size distributions typical of oxide from similar lots, as the processing of fuel lots is fairly uniform. Variation between lots is, as yet, uncharacterized. Sampling and particle size measurement methods are discussed elsewhere.

  20. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.

  1. Thermal conductivity measurements of particulate materials: 3. Natural samples and mixtures of particle sizes

    NASA Astrophysics Data System (ADS)

    Presley, Marsha A.; Craddock, Robert A.

    2006-09-01

    A line-heat source apparatus was used to measure thermal conductivities of natural fluvial and eolian particulate sediments under low pressures of a carbon dioxide atmosphere. These measurements were compared to a previous compilation of the dependence of thermal conductivity on particle size to determine a thermal conductivity-derived particle size for each sample. Actual particle-size distributions were determined via physical separation through brass sieves. Comparison of the two analyses indicates that the thermal conductivity reflects the larger particles within the samples. In each sample at least 85-95% of the particles by weight are smaller than or equal to the thermal conductivity-derived particle size. At atmospheric pressures less than about 2-3 torr, samples that contain a large amount of small particles (<=125 μm or 4 Φ) exhibit lower thermal conductivities relative to those for the larger particles within the sample. Nonetheless, 90% of the sample by weight still consists of particles that are smaller than or equal to this lower thermal conductivity-derived particle size. These results allow further refinement in the interpretation of geomorphologic processes acting on the Martian surface. High-energy fluvial environments should produce poorer-sorted and coarser-grained deposits than lower energy eolian environments. Hence these results will provide additional information that may help identify coarser-grained fluvial deposits and may help differentiate whether channel dunes are original fluvial sediments that are at most reworked by wind or whether they represent a later overprint of sediment with a separate origin.

  2. Determining chewing efficiency using a solid test food and considering all phases of mastication.

    PubMed

    Liu, Ting; Wang, Xinmiao; Chen, Jianshe; van der Glas, Hilbert W

    2018-07-01

    Following the chewing of a solid food, the median particle size, X50, is determined after N chewing cycles by curve-fitting of the particle size distribution. Reduction of X50 with N is traditionally followed from N ≥ 15-20 cycles when using the artificial test food Optosil®, because of initially unreliable values of X50. The aims of the study were (i) to enable testing at small N-values by using initial particles of appropriate size, shape and amount, and (ii) to compare measures of chewing ability, i.e. chewing efficiency (the N needed to halve the initial particle size, N(1/2-X0)) and chewing performance (the X50 at a particular N-value, X50,N). 8 subjects with a natural dentition chewed 4 types of samples of Optosil particles: (1) 8 cubes of 8 mm, a border size relative to the bin sizes (the traditional test); (2) 9 half-cubes of 9.6 mm, a mid-size relative to the bin sizes, with similar sample volume; and (3) 4 half-cubes and (4) 2 half-cubes of 9.6 mm, with reduced particle number and sample volume. All samples were tested with 4 N-values. Curve-fitting with a 2nd-order polynomial function yielded log(X50)-log(N) relationships, from which N(1/2-X0) and X50,N were obtained. Reliable X50-values are obtained for all N-values when using half-cubes with a mid-size relative to the bin sizes. By using 2 or 4 half-cubes, determination of N(1/2-X0) or X50,N needs fewer chewing cycles than the traditional test. Chewing efficiency is preferable to chewing performance because it compares inter-subject chewing ability at the same stage of food comminution, with constant intra-subject and inter-subject ratios within and between samples respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.
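
    The curve-fitting step can be sketched in a few lines: fit a 2nd-order polynomial to log(X50) versus log(N) and read off the number of cycles at which the initial size is halved. The data points below are invented for illustration:

    ```python
    import numpy as np

    N = np.array([3, 6, 12, 24])          # chewing cycles tested
    x50 = np.array([7.1, 5.2, 3.4, 1.9])  # fitted median particle size (mm)
    x0 = 9.6                              # initial particle size (mm)

    # 2nd-order polynomial fit of log(X50) against log(N)
    coeffs = np.polyfit(np.log(N), np.log(x50), 2)
    grid = np.linspace(np.log(N[0]), np.log(N[-1]), 1000)
    idx = np.argmin(np.abs(np.polyval(coeffs, grid) - np.log(x0 / 2)))
    print(f"N(1/2-X0) is roughly {np.exp(grid[idx]):.1f} cycles")
    ```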

  3. Two models of the sound-signal frequency dependence on the animal body size as exemplified by the ground squirrels of Eurasia (mammalia, rodentia).

    PubMed

    Nikol'skii, A A

    2017-11-01

    Dependence of the sound-signal frequency on the animal body length was studied in 14 ground squirrel species (genus Spermophilus) of Eurasia. Regression analysis of the total sample yielded a low determination coefficient (R² = 26%), because the total sample proved to be heterogeneous in terms of signal frequency within the dimension classes of animals. When the total sample was divided into two groups according to signal frequency, two statistically significant models (regression equations) were obtained in which signal frequency depended on the body size at high determination coefficients (R² = 73 and 94% versus 26% for the total sample). Thus, the problem of correlation between animal body size and the frequency of their vocal signals does not have a unique solution.

  4. Sample size allocation for food item radiation monitoring and safety inspection.

    PubMed

    Seto, Mayumi; Uriu, Koichiro

    2015-03-01

    The objective of this study is to identify a procedure for determining sample size allocation for food radiation inspections of more than one food item to minimize the potential risk to consumers of internal radiation exposure. We consider a simplified case of food radiation monitoring and safety inspection in which a risk manager is required to monitor two food items, milk and spinach, in a contaminated area. Three protocols for food radiation monitoring with different sample size allocations were assessed by simulating random sampling and inspections of milk and spinach in a conceptual monitoring site. Distributions of (131)I and radiocesium concentrations were determined in reference to (131)I and radiocesium concentrations detected in Fukushima prefecture, Japan, for March and April 2011. The results of the simulations suggested that a protocol that allocates sample size to milk and spinach based on the estimation of (131)I and radiocesium concentrations using the apparent decay rate constants sequentially calculated from past monitoring data can most effectively minimize the potential risks of internal radiation exposure. © 2014 Society for Risk Analysis.
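
    A toy sketch of the sequential estimation idea (all concentrations, time intervals and the budget are invented placeholders): estimate apparent first-order decay constants from past monitoring data, project current concentrations, and allocate samples proportionally:

    ```python
    import numpy as np

    def apparent_decay_rate(c_then: float, c_now: float, days: float) -> float:
        """Apparent first-order decay constant from two measurements."""
        return np.log(c_then / c_now) / days

    budget = 100                                               # samples available
    past = {"milk": (120.0, 60.0), "spinach": (900.0, 300.0)}  # Bq/kg, 7 d apart

    forecast = {}
    for item, (c_then, c_now) in past.items():
        k = apparent_decay_rate(c_then, c_now, days=7)
        forecast[item] = c_now * np.exp(-k * 7)                # one week ahead

    total = sum(forecast.values())
    alloc = {item: round(budget * c / total) for item, c in forecast.items()}
    print(forecast, alloc)   # rounded allocation proportional to forecast risk
    ```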

  5. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources against the ability to detect a real effect is difficult. Focusing on two-arm, parallel-group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were performed only for sample size calculations that were amenable to replication, defined as using a clearly identified continuous or binary outcome in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. For most RCTs, the difference between the values observed in the study and the expected values used for the sample size calculation was > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how sample sizes are calculated and reported in anesthesiology research are needed.
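
    The calculation such reviews attempt to replicate is, for a continuous outcome, the standard two-group formula; a minimal sketch contrasting protocol assumptions with hypothetical observed values (all numbers invented):

    ```python
    import math
    from scipy.stats import norm

    def n_per_group(delta: float, sd: float, alpha=0.05, power=0.80) -> int:
        """n per group = 2 * (z_{1-a/2} + z_{power})^2 * (sd / delta)^2."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(2 * (z * sd / delta) ** 2)

    planned = n_per_group(delta=10.0, sd=20.0)  # protocol assumptions
    actual = n_per_group(delta=6.0, sd=22.0)    # hypothetical observed values
    print(planned, actual)  # 63 vs 212: optimism leaves the trial underpowered
    ```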

  6. Influences of Co doping on the structural and optical properties of ZnO nanostructured

    NASA Astrophysics Data System (ADS)

    Majeed Khan, M. A.; Wasi Khan, M.; Alhoshan, Mansour; Alsalhi, M. S.; Aldwayyan, A. S.

    2010-07-01

    Pure and Co-doped ZnO nanostructured samples have been synthesized by a chemical route. We have studied the structural and optical properties of the samples using X-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM), field-emission transmission electron microscopy (FETEM), energy-dispersive X-ray (EDX) analysis and UV-VIS spectroscopy. The XRD patterns show that all the samples have the hexagonal wurtzite structure. Changes in crystallite size due to mechanical activation were also determined from the X-ray measurements. These results were correlated with changes in particle size followed by SEM and TEM. The average crystallite sizes obtained from XRD were between 20 and 25 nm. The TEM images showed that the average particle size of the undoped ZnO nanostructure was about 20 nm, whereas the smallest average grain size, at 3% Co, was about 15 nm. Optical parameters such as the absorption coefficient (α), energy band gap (Eg), refractive index (n), and dielectric constants (σ) have been determined using different methods.

  7. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
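
    A toy numeric sketch of the net-benefit idea (the utility function, effect size and costs below are invented, and the power calculation is a normal approximation rather than the authors' fully Bayesian treatment):

    ```python
    import numpy as np
    from scipy.stats import norm

    delta, sd = 0.4, 1.0                   # assumed effect and SD
    benefit, cost_per_patient = 1e6, 800   # invented utility parameters

    def power(n, alpha=0.05):              # normal-approximation power
        return 1 - norm.cdf(norm.ppf(1 - alpha / 2) - delta / (sd * np.sqrt(2 / n)))

    ns = np.arange(10, 2000)
    enb = benefit * power(ns) - cost_per_patient * 2 * ns   # two arms
    print("optimal n per arm:", ns[np.argmax(enb)])
    ```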

  8. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with Bernoulli- and Poisson-distributed responses, showing that the asymptotic approximations can also be reasonable for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Evaluations of the Method to Measure Black Carbon Particles Suspended in Rainwater and Snow Samples

    NASA Astrophysics Data System (ADS)

    Ohata, S.; Moteki, N.; Schwarz, J. P.; Fahey, D. W.; Kondo, Y.

    2012-12-01

    The mass concentrations and size distributions of black carbon (BC) particles in rainwater and snow are important parameters for improved understanding of the wet deposition of BC, which is a key process in quantifying the impacts of BC on climate. In this study, we have evaluated a new method to measure these parameters. The approach consists of an ultrasonic nebulizer (USN) used in conjunction with a Single Particle Soot Photometer (SP2). The USN converts sample water into micron-sized droplets at a constant rate and then extracts airborne BC particles by dehydrating the water droplets. The mass of individual BC particles is measured by the SP2, based on the laser-induced incandescence technique. The combination of the USN and SP2 enables the measurement of BC particles using only a small amount of sample water, typically 10 ml (Ohata et al., 2011). However, the loss of BC during the extraction process depends on particle size. We determined the size-dependent extraction efficiency using polystyrene latex spheres (PSLs) with twelve different diameters between 100 and 1050 nm. The PSL concentrations in water were determined by light extinction at 532 nm. The extraction efficiency of the USN showed a broad maximum in the diameter range of 200-500 nm and decreased substantially at larger sizes. The extraction efficiency determined using the PSL standards agreed to within ±40% with that determined using laboratory-generated BC concentration standards. We applied this method to the analysis of rainwater collected in Tokyo and in Okinawa over the East China Sea. Measured BC size distributions in all rainwater samples showed a negligible contribution of BC particles larger than 600 nm to the total BC amounts. However, for BC particles in surface snow collected in Greenland and Antarctica, size distributions were sometimes shifted to much larger size ranges.

  10. Geostatistics and the representative elementary volume of gamma ray tomography attenuation in rocks cores

    USGS Publications Warehouse

    Vogel, J.R.; Brown, G.O.

    2003-01-01

    Semivariograms of samples of Culebra Dolomite have been determined at two different resolutions from gamma ray computed tomography images. By fitting models to the semivariograms, small-scale and large-scale correlation lengths were determined for four samples. Different semivariogram parameters were found for adjacent cores at both resolutions. Representative elementary volume (REV) concepts are related to the stationarity of the sample. A scale disparity factor is defined and used to determine the sample size required for ergodic stationarity with a specified correlation length. This allows for comparison of geostatistical measures and representative elementary volumes. The modifiable areal unit problem is also addressed and used to determine resolution effects on correlation lengths. By changing resolution, a range of correlation lengths can be determined for the same sample. Comparison of voxel volume to the best-fit model correlation length of a single sample at different resolutions reveals a linear scaling effect. Using this relationship, the range of the point-value semivariogram is determined; this is the range approached as the voxel size goes to zero. Finally, these results are compared to the regularization theory of point variables for borehole cores and are found to give a better fit for predicting the volume-averaged range.

  11. A note on sample size calculation for mean comparisons based on noncentral t-statistics.

    PubMed

    Chow, Shein-Chung; Shao, Jun; Wang, Hansheng

    2002-11-01

    One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
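
    A sketch of the corresponding computation for testing equality in a two-sample parallel design, using scipy's noncentral t-distribution to search for the smallest n per group reaching 80% power (the effect size and SD are illustrative):

    ```python
    from scipy.stats import t, nct

    def power_two_sample(n, delta, sd, alpha=0.05):
        df = 2 * (n - 1)
        ncp = delta / (sd * (2 / n) ** 0.5)     # noncentrality parameter
        crit = t.ppf(1 - alpha / 2, df)
        return 1 - nct.cdf(crit, df, ncp) + nct.cdf(-crit, df, ncp)

    def min_n(delta, sd, target=0.80):
        n = 2
        while power_two_sample(n, delta, sd) < target:
            n += 1
        return n

    print(min_n(delta=5.0, sd=10.0))  # -> 64 per group
    ```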

  12. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    PubMed

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations for improving the quality of data collection on macrofungal diversity in different land use systems in future studies, and this requires a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10,000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness differed significantly between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining the optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  13. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.

  14. Effect of sample inhomogeneity in K-Ar dating

    USGS Publications Warehouse

    Engels, J.C.; Ingamells, C.O.

    1970-01-01

    Error in K-Ar ages is often due more to deficiencies in the splitting process, whereby portions of the sample are taken for potassium and for argon determination, than to imprecision in the analytical methods. The effect of the grain size of a sample and of the composition of a contaminating mineral can be evaluated, and this provides a useful guide in attempts to minimize error. Rocks and minerals should be prepared for age determination with the effects of contaminants and grain size in mind. The magnitude of such effects can be much larger than intuitive estimates might indicate. © 1970.

  15. A simple autocorrelation algorithm for determining grain size from digital images of sediment

    USGS Publications Warehouse

    Rubin, D.M.

    2004-01-01

    Autocorrelation between pixels in digital images of sediment can be used to measure average grain size of sediment on the bed, grain-size distribution of bed sediment, and vertical profiles in grain size in a cross-sectional image through a bed. The technique is less sensitive than traditional laboratory analyses to tails of a grain-size distribution, but it offers substantial other advantages: it is 100 times as fast; it is ideal for sampling surficial sediment (the part that interacts with a flow); it can determine vertical profiles in grain size on a scale finer than can be sampled physically; and it can be used in the field to provide almost real-time grain-size analysis. The technique can be applied to digital images obtained using any source with sufficient resolution, including digital cameras, digital video, or underwater digital microscopes (for real-time grain-size mapping of the bed). © 2004, SEPM (Society for Sedimentary Geology).
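
    A minimal sketch of the autocorrelation idea (not the published, calibrated algorithm): compute the image autocorrelation via FFT and take the lag at which correlation decays below a threshold as a proxy for the characteristic grain scale. The synthetic image and the 0.5 threshold are illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    img = gaussian_filter(rng.normal(size=(256, 256)), sigma=5)  # blobs ~5 px

    f = np.fft.fft2(img - img.mean())
    acf = np.fft.ifft2(f * np.conj(f)).real   # circular autocorrelation
    acf /= acf[0, 0]                          # normalize to 1 at zero lag
    radial = acf[0, :128]                     # correlation along one axis
    grain_px = int(np.argmax(radial < 0.5))   # first lag below threshold
    print("characteristic grain scale:", grain_px, "pixels")
    ```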

  16. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  17. Application of SAXS and SANS in evaluation of porosity, pore size distribution and surface area of coal

    USGS Publications Warehouse

    Radlinski, A.P.; Mastalerz, Maria; Hinde, A.L.; Hainbuchner, M.; Rauch, H.; Baron, M.; Lin, J.S.; Fan, L.; Thiyagarajan, P.

    2004-01-01

    This paper discusses the applicability of small angle X-ray scattering (SAXS) and small angle neutron scattering (SANS) techniques for determining the porosity, pore size distribution and internal specific surface area in coals. The method is noninvasive, fast, inexpensive and does not require complex sample preparation. It uses coal grains of about 0.8 mm size mounted in standard pellets as used for petrographic studies. Assuming spherical pore geometry, the scattering data are converted into the pore size distribution in the size range 1 nm (10 Å) to 20 μm (200,000 Å) in diameter, accounting for both open and closed pores. FTIR as well as SAXS and SANS data for seven samples of oriented whole coals and corresponding pellets with vitrinite reflectance (Ro) values in the range 0.55% to 5.15% are presented and analyzed. Our results demonstrate that pellets adequately represent the average microstructure of coal samples. The scattering data have been used to calculate the maximum surface area available for methane adsorption. Total porosity as a percentage of sample volume is calculated and compared with worldwide trends. By demonstrating the applicability of SAXS and SANS techniques to determine the porosity, pore size distribution and surface area in coals, we provide a new and efficient tool, which can be used for any type of coal sample, from a thin slice to a representative sample of a thick seam. © 2004 Elsevier B.V. All rights reserved.

  18. Sample size determination for disease prevalence studies with partially validated data.

    PubMed

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

    Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods are illustrated by a real-data example. © The Author(s) 2012.
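
    The second (interval-width) approach can be sketched in the simple complete-data case, where a Wald interval for the prevalence p is to have total width at most w; the paper's machinery for partially validated data is not reproduced here:

    ```python
    import math
    from scipy.stats import norm

    def n_for_ci_width(p: float, w: float, conf: float = 0.95) -> int:
        """n so that a Wald CI for p has total width at most w."""
        z = norm.ppf(1 - (1 - conf) / 2)
        return math.ceil(4 * z**2 * p * (1 - p) / w**2)

    # e.g. expected prevalence 10%, desired 95% CI of total width 0.04
    print(n_for_ci_width(0.10, 0.04))  # -> 865
    ```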

  19. Accounting for treatment by center interaction in sample size determinations and the use of surrogate outcomes in the pessary for the prevention of preterm birth trial: a simulation study.

    PubMed

    Willan, Andrew R

    2016-07-05

    The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment by center interaction, and the issue of how this will affect the sample size requirements was raised. The sample size requirements to examine the effect of the pessary on the baby's clinical outcome was prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue a Bayesian decision model is proposed that combines the information regarding the between-treatment difference in the probability of preterm birth from PS3 with the data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study that relate preterm birth and perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment by center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. Under the same assumptions used for determining the sample size requirements, the simulated mean probability that pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with coefficient of variation and increased with the number of clinical sites. Employing simulation and sensitivity analysis is a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment by center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
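
    The Standard-Arm heterogeneity assumption can be reproduced in a few lines: solve for the Beta parameters implied by a mean of 0.3 and a coefficient of variation of 0.3, then draw per-center probabilities (the number of centers below is a placeholder):

    ```python
    import numpy as np

    mean, cv, n_centers = 0.3, 0.3, 40        # mean and CV of p; centers
    var = (cv * mean) ** 2
    s = mean * (1 - mean) / var - 1           # alpha + beta from mean/variance
    a, b = mean * s, (1 - mean) * s           # a ~ 7.48, b ~ 17.45

    rng = np.random.default_rng(0)
    p_center = rng.beta(a, b, size=n_centers)  # per-center Standard-Arm risks
    print(round(a, 2), round(b, 2), p_center.round(2))
    ```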

  20. Estimating the breeding population of long-billed curlew in the United States

    USGS Publications Warehouse

    Stanley, T.R.; Skagen, S.K.

    2007-01-01

    Determining population size and long-term trends in population size for species of high concern is a priority of international, national, and regional conservation plans. Long-billed curlews (Numenius americanus) are a species of special concern in North America due to apparent declines in their population. Because long-billed curlews are not adequately monitored by existing programs, we undertook a 2-year study with the goals of 1) determining present long-billed curlew distribution and breeding population size in the United States and 2) providing recommendations for a long-term long-billed curlew monitoring protocol. We selected a stratified random sample of survey routes in 16 western states for sampling in 2004 and 2005, and we analyzed count data from these routes to estimate detection probabilities and abundance. In addition, we evaluated habitat along roadsides to determine how well roadsides represented habitat throughout the sampling units. We estimated there were 164,515 (SE = 42,047) breeding long-billed curlews in 2004, and 109,533 (SE = 31,060) breeding individuals in 2005. These estimates far exceed currently accepted estimates based on expert opinion. We found that habitat along roadsides was representative of long-billed curlew habitat in general. We make recommendations for improving sampling methodology, and we present power curves to provide guidance on minimum sample sizes required to detect trends in abundance.

  1. Estuarine sediment toxicity tests on diatoms: Sensitivity comparison for three species

    NASA Astrophysics Data System (ADS)

    Moreno-Garrido, Ignacio; Lubián, Luis M.; Jiménez, Begoña; Soares, Amadeu M. V. M.; Blasco, Julián

    2007-01-01

    Experimental populations of three marine and estuarine diatoms were exposed to sediments with different levels of pollutants, collected from the Aveiro Lagoon (NW Portugal). The species selected were Cylindrotheca closterium, Phaeodactylum tricornutum and Navicula sp. Preliminary experiments were designed to determine the influence of the sediment particle-size distribution on the growth of the species assayed. The percentage of silt-sized sediment affected the growth of the selected species under the experimental conditions: the higher the percentage of silt-sized sediment, the lower the growth. Percentages of silt-sized sediment below 10%, however, did not affect growth. In general, C. closterium appears to be slightly more sensitive to the selected sediments than the other two species. Two groups of sediment samples were identified on the basis of the general response of the exposed microalgal populations: three of the six samples used were more toxic than the other three. Chemical analysis of the samples was carried out to determine the specific cause of the differences in toxicity. Statistical analysis showed that the concentrations of Sn, Zn, Hg, Cu and Cr (among all the physico-chemical parameters analyzed), in order of importance, were the most important factors separating the two groups of samples (more and less toxic). Benthic diatoms appear to be sensitive organisms for sediment toxicity tests. Toxicity data from bioassays involving microphytobenthos should be taken into account when environmental risks are calculated.

  2. Scanning fiber angle-resolved low coherence interferometry

    PubMed Central

    Zhu, Yizheng; Terry, Neil G.; Wax, Adam

    2010-01-01

    We present a fiber-optic probe for Fourier-domain angle-resolved low coherence interferometry for the determination of depth-resolved scatterer size. The probe employs a scanning single-mode fiber to collect the angular scattering distribution of the sample, which is analyzed using the Mie theory to obtain the average size of the scatterers. Depth sectioning is achieved with low coherence Mach–Zehnder interferometry. In the sample arm of the interferometer, a fixed fiber illuminates the sample through an imaging lens and a collection fiber samples the backscattered angular distribution by scanning across the Fourier plane image of the sample. We characterize the optical performance of the probe and demonstrate the ability to execute depth-resolved sizing with subwavelength accuracy by using a double-layer phantom containing two sizes of polystyrene microspheres. PMID:19838271

  3. Vessel Sampling and Blood Flow Velocity Distribution With Vessel Diameter for Characterizing the Human Bulbar Conjunctival Microvasculature.

    PubMed

    Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua

    2016-03-01

    This study determined (1) how many vessels (i.e., the vessel sampling size) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histograms of the blood flow velocity and vessel diameter. A functional slitlamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva of five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. Histograms of the diameter and velocity were plotted to examine whether the distributions were normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean to the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the joint distribution of diameter and velocity. The results showed that the sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the mean of the total vessel population. The distributions of the diameter and velocity were unimodal but somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histograms of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
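
    The cutoff rule can be sketched as a running calculation that stops once the standard error of the mean drops to 15% of the running mean; the simulated lognormal velocities below simply mimic the positive skew reported:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    velocity = rng.lognormal(mean=-0.7, sigma=0.6, size=200)  # simulated mm/s

    for n in range(5, len(velocity) + 1):
        sample = velocity[:n]
        se = sample.std(ddof=1) / np.sqrt(n)     # standard error of the mean
        if se <= 0.15 * sample.mean():           # 15%-of-mean cutoff
            print("sampling size cutoff:", n, "vessels")
            break
    ```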

  4. Particle size analysis of sediments, soils and related particulate materials for forensic purposes using laser granulometry.

    PubMed

    Pye, Kenneth; Blott, Simon J

    2004-08-11

    Particle size is a fundamental property of any sediment, soil or dust deposit which can provide important clues to nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with a high resolution. The Coulter™ LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04-2000 μm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work, laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.

  5. Size and modal analyses of fines and ultrafines from some Apollo 17 samples

    NASA Technical Reports Server (NTRS)

    Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.

    1975-01-01

    Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.

  6. Assessment of optimum threshold and particle shape parameter for the image analysis of aggregate size distribution of concrete sections

    NASA Astrophysics Data System (ADS)

    Ozen, Murat; Guler, Murat

    2014-02-01

    Aggregate gradation is one of the key design parameters affecting the workability and strength properties of concrete mixtures. Estimating aggregate gradation from hardened concrete samples can offer valuable insights into the quality of mixtures in terms of the degree of segregation and the amount of deviation from the specified gradation limits. In this study, a methodology is introduced to determine the particle size distribution of aggregates from 2D cross-sectional images of concrete samples. The samples used in the study were fabricated from six mix designs by varying the aggregate gradation, aggregate source and maximum aggregate size, with five replicates of each design combination. Each sample was cut into three pieces using a diamond saw and then scanned to obtain cross-sectional images using a desktop flatbed scanner. An algorithm is proposed to determine the optimum threshold for the image analysis of the cross sections. A procedure is also suggested for determining a suitable particle shape parameter to be used in the analysis of the aggregate size distribution within each cross section. The results of the analyses indicated that the optimum threshold, and hence the pixel distribution functions, may differ even between cross sections of the same concrete sample. In addition, the maximum Feret diameter is the most suitable shape parameter for estimating the size distribution of aggregates when computed based on the diagonal sieve opening. The outcome of this study can be of practical value for practitioners evaluating concrete in terms of the degree of segregation and the gradation bounds achieved during manufacturing.
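
    As a sketch of the image-analysis pipeline, the example below substitutes Otsu's method for the paper's own optimum-threshold algorithm and measures each region's maximum Feret diameter with scikit-image; the synthetic image is a stand-in for a scanned section:

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    rng = np.random.default_rng(0)
    img = rng.random((200, 200))     # stand-in for a scanned cross section
    img[40:90, 40:90] += 1.0         # one bright "aggregate" patch

    mask = img > threshold_otsu(img)             # binarize at Otsu threshold
    for region in regionprops(label(mask)):
        if region.area > 50:                     # ignore speckle
            print("max Feret diameter (px):", round(region.feret_diameter_max, 1))
    ```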

  7. Fabrication and Characterization of Surrogate Glasses Aimed to Validate Nuclear Forensic Techniques

    DTIC Science & Technology

    2017-12-01

    sample is processed while submerged and produces fine sized particles the exposure levels and risk of contamination from the samples is also greatly...induced the partial collapses of the xerogel network strengthened the network while the sample sizes were reduced [22], [26]. As a result the wt...inhomogeneous, making it difficult to clearly determine which features were present in the sample before LDHP and which were caused by it. In this study

  8. Power and Precision in Confirmatory Factor Analytic Tests of Measurement Invariance

    ERIC Educational Resources Information Center

    Meade, Adam W.; Bauer, Daniel J.

    2007-01-01

    This study investigates the effects of sample size, factor overdetermination, and communality on the precision of factor loading estimates and the power of the likelihood ratio test of factorial invariance in multigroup confirmatory factor analysis. Although sample sizes are typically thought to be the primary determinant of precision and power,…

  9. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination is usually taught on the basis of theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than lectures alone. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes; explored factors affecting post-test scores after training in sample size calculation; and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were given in the intervention group. Participants in the intervention group showed better learning of sample size calculation (2.7 points out of a maximum of 10 points, 95% CI: 2.4 - 2.9) than participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  10. Simulation analyses of space use: Home range estimates, variability, and sample size

    USGS Publications Warehouse

    Bekoff, Marc; Mech, L. David

    1984-01-01

    Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability among mean home range area estimates generated by multiple simulations for the same sample size decreased. Our results suggest that field workers should obtain between 100 and 200 locations in order to estimate home range area reliably. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also has to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data are still needed.
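
    The simulation logic is easy to reproduce in outline: draw n locations from an assumed utilization distribution, compute a minimum convex polygon (MCP) home range, and track how the mean and variability of the area estimate behave as n grows. The sketch below is a toy version with a bivariate normal utilization distribution, not the authors' simulation; note that in 2D, scipy's ConvexHull reports the polygon area in its volume attribute.

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(1)

        def mcp_area(n_locations):
            """Area of the minimum convex polygon around n simulated locations."""
            pts = rng.normal(scale=1.0, size=(n_locations, 2))  # toy utilization dist.
            return ConvexHull(pts).volume   # in 2D, .volume is the polygon area

        for n in (25, 50, 100, 200, 400):
            areas = [mcp_area(n) for _ in range(500)]
            cv = np.std(areas) / np.mean(areas)
            print(f"n={n:4d}  mean MCP area={np.mean(areas):6.2f}  CV={cv:.3f}")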

  11. Testing of SIR (a transformable robotic submarine) in Lake Tahoe for future deployment at West Antarctic Ice Sheet grounding lines of Siple Coast

    NASA Astrophysics Data System (ADS)

    Powell, R. D.; Scherer, R. P.; Griffiths, I.; Taylor, L.; Winans, J.; Mankoff, K. D.

    2011-12-01

    A remotely operated vehicle (ROV) has been custom-designed and built by DOER Marine to meet scientific requirements for exploring subglacial water cavities. This sub-ice rover (SIR) will explore and quantitatively document the grounding zone areas of the Ross Ice Shelf cavity using a 3km-long umbilical tether, deployed through an 800m-long ice borehole in a torpedo configuration, which is also its default mode if an operational failure occurs. Once in the ocean cavity it transforms, via a diamond-shaped intermediate geometry, into a rectangular flight mode in which all of its instruments become active. Instrumentation includes 4 cameras (one forward-looking HD), a vertical scanning sonar (long-range imaging for spatial orientation and navigation), Doppler current meter (determine water current velocities), multi-beam sonar (image and swath-map bottom topography), sub-bottom profiler (profile sub-sea-floor sediment for geological history), CTD (determine salinity, temperature and depth), DO meter (determine dissolved oxygen content in water), transmissometer (determine suspended particulate concentrations in water), laser particle-size analyzer (determine sizes of particles in water), triple laser beams (determine size and volume of objects), thermistor probe (measure in situ temperatures of ice and sediment), shear vane probe (determine in situ strength of sediment), manipulator arm (deploy instrumentation packages, collect samples), shallow ice corer (collect ice samples and glacial debris), water sampler (determine sea water/freshwater composition, calibrate real-time sensors, sample microbes), and shallow sediment corer (sample sea floor, in-ice and subglacial sediment for stratigraphy, facies, particle size, composition, structure, fabric, microbes). A sophisticated array of data handling, storage and display will allow real-time observations and environmental assessments to be made. This robotic submarine and other instruments will be tested in Lake Tahoe in September 2011, and results of its trials and of the geological and biological findings down to the deepest depths of the lake will be presented. Other instruments include a 5m-long percussion corer for sampling deeper sediments, an ice-tethered profiler with CTD and ADCP, and an in situ oceanographic mooring designed to fit down a narrow (30cm-diameter) ice borehole that includes interchangeable packages of ADCPs, CTDs, transmissometers, a laser particle-size analyzer, DO meter, automated multi-port water sampler, water column nutrient analyzer, sediment porewater chemistry analyzer, down-looking color camera, and altimeter.

  12. MUDMASTER: A Program for Calculating Crystalline Size Distributions and Strain from the Shapes of X-Ray Diffraction Peaks

    USGS Publications Warehouse

    Eberl, D.D.; Drits, V.A.; Środoń, Jan; Nüesch, R.

    1996-01-01

    Particle size may strongly influence the physical and chemical properties of a substance (e.g. its rheology, surface area, cation exchange capacity, solubility, etc.), and its measurement in rocks may yield geological information about ancient environments (sediment provenance, degree of metamorphism, degree of weathering, current directions, distance to shore, etc.). Therefore mineralogists, geologists, chemists, soil scientists, and others who deal with clay-size material would like to have a convenient method for measuring particle size distributions. Nano-size crystals generally are too fine to be measured by light microscopy. Laser scattering methods give only average particle sizes; therefore particle size cannot be measured in a particular crystallographic direction. Also, the particles measured by laser techniques may be composed of several different minerals, and may be agglomerations of individual crystals. Measurement by electron and atomic force microscopy is tedious, expensive, and time consuming, and it is difficult to measure more than a few hundred particles per sample by these methods. This many measurements, often taking several days of intensive effort, may yield an accurate mean size for a sample, but may be too few to determine an accurate distribution of sizes. Measurement of size distributions by X-ray diffraction (XRD) overcomes these shortcomings. An X-ray scan of a sample runs automatically, taking a few minutes to a few hours. The resulting XRD peaks average diffraction effects from billions of individual nano-size crystals. The size that is measured by XRD may be related to the size of the individual crystals of the mineral in the sample, rather than to the size of particles formed from the agglomeration of these crystals. Therefore one can determine the size of a particular mineral in a mixture of minerals, and the sizes in a particular crystallographic direction of that mineral.

  13. Non-destructive crystal size determination in geological samples of archaeological use by means of infrared spectroscopy.

    PubMed

    Olivares, M; Larrañaga, A; Irazola, M; Sarmiento, A; Murelaga, X; Etxebarria, N

    2012-08-30

    The determination of the crystal size of chert samples can provide useful information about the raw material used for the manufacture of archeological items. X-ray diffraction (XRD) has been widely used for this purpose in several scientific areas. However, the historical value of archeological pieces sometimes makes this procedure unfeasible, and thus new non-invasive analytical approaches are required. In this sense, a new method was developed relating the crystal size obtained by XRD to infrared spectroscopy (IR) using partial least squares regression. The IR spectra collected from a large number of different geological chert samples of archeological use were pre-processed with different treatments (i.e., derivatization or sample-wise normalization) to obtain the best regression model. The full cross-validation was satisfactorily validated using real samples; the experimental root mean square error of prediction was 165 Å, and the average precision of the estimated size values was 3%. The features of the infrared bands were also evaluated in order to understand the basis of the model's predictive ability. In the case studied, the variance in the model was associated with differences in the characteristic stretching and bending infrared bands of SiO(2). Based on this, it is feasible to estimate the crystal size provided a chemometric model relating the size measured by standard methods to the IR spectra is built beforehand. Copyright © 2012 Elsevier B.V. All rights reserved.
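
    The calibration step described here (PLS regression of XRD-measured crystal size on preprocessed IR spectra, validated by full cross-validation) can be sketched with scikit-learn. The arrays below are random stand-ins for real spectra and XRD sizes, and the number of components would in practice be tuned; this is an illustration of the workflow, not the authors' model.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 400))                # stand-in preprocessed IR spectra
        y = rng.normal(loc=900, scale=150, size=60)   # stand-in XRD sizes (angstroms)

        pls = PLSRegression(n_components=5)           # tuned by CV in practice
        y_cv = cross_val_predict(pls, X, y, cv=10).ravel()   # full cross-validation
        rmsep = np.sqrt(np.mean((y - y_cv) ** 2))     # root mean square error of prediction
        print(f"RMSEP = {rmsep:.0f} angstroms")

        # Once validated, size is predicted non-destructively from a new spectrum.
        pls.fit(X, y)
        size_new = float(pls.predict(X[:1]).ravel()[0])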

  14. Determination of the size, concentration, and refractive index of silica nanoparticles from turbidity spectra.

    PubMed

    Khlebtsov, Boris N; Khanadeev, Vitaly A; Khlebtsov, Nikolai G

    2008-08-19

    The size and concentration of silica cores determine the size and concentration of silica/gold nanoshells in final preparations. Until now, the concentration of silica/gold nanoshells with a Stöber silica core has been evaluated through a material balance assumption. Here, we describe a method for simultaneous determination of the average size and concentration of silica nanospheres from turbidity spectra measured within the 400-600 nm spectral band. As the refractive index of silica nanoparticles is the key input parameter for optical determination of their concentration, we propose an optical method and provide experimental data for a direct determination of the refractive index of silica particles, n = 1.475 +/- 0.005. Finally, we exemplify our method by determining the particle size and concentration for 10 samples and compare the results with transmission electron microscopy (TEM), atomic force microscopy (AFM), and dynamic light scattering data.

  15. Effect of roll hot press temperature on crystallite size of PVDF film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartono, Ambran, E-mail: ambranhartono@yahoo.com; Sanjaya, Edi; Djamal, Mitra

    2014-03-24

    PVDF films were fabricated using a hot roll press. Samples were prepared at nine different temperatures in order to examine the effect of roll hot press temperature on the crystallite size of the PVDF films. The samples were characterized by X-ray diffraction to obtain their diffraction patterns, and the crystallite size of each sample was then calculated from its diffraction pattern using the Scherrer equation. For samples prepared at temperatures from 130 °C up to 170 °C, the calculated crystallite size increased from 7.2 nm up to 20.54 nm. These results show that increasing the temperature also increases the crystallite size of the sample, because higher temperatures produce a higher degree of crystallization in the PVDF film. This indicates that the specific volume or size of the crystals depends on the magnitude of the temperature, consistent with the earlier findings of Nakagawa.
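
    The crystallite-size calculation in this record follows the Scherrer equation, D = K * lambda / (beta * cos(theta)), with D the crystallite size, K a shape factor, lambda the X-ray wavelength, beta the peak broadening (FWHM, in radians), and theta the Bragg angle. A minimal sketch with a hypothetical peak (the numbers are not from the paper):

        import numpy as np

        def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
            """Crystallite size D = K * lambda / (beta * cos(theta)), in nm.

            fwhm_deg: peak width in degrees 2-theta, with instrumental
            broadening assumed already removed; the wavelength defaults
            to Cu K-alpha and K to the common 0.9 shape factor.
            """
            beta = np.deg2rad(fwhm_deg)                # FWHM in radians
            theta = np.deg2rad(two_theta_deg / 2.0)    # Bragg angle
            return K * wavelength_nm / (beta * np.cos(theta))

        # Hypothetical peak at 2-theta = 20.3 deg, FWHM = 1.2 deg -> about 6.7 nm
        print(f"D = {scherrer_size(1.2, 20.3):.1f} nm")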

  16. Heavy metals relationship with water and size-fractionated sediments in rivers using canonical correlation analysis (CCA) case study, rivers of south western Caspian Sea.

    PubMed

    Vosoogh, Ali; Saeedi, Mohsen; Lak, Raziyeh

    2016-11-01

    Some pollutants can qualitatively affect freshwater bodies such as rivers, and heavy metals are among the most important pollutants in fresh waters. Heavy metals can be found dissolved in these waters or bound to suspended particles and surface sediments, and can be regarded as being in equilibrium between water and sediment. In this study, the amounts of heavy metals were determined in water and in different sediment size fractions, and canonical correlation analysis (CCA) was used to obtain the relationship between heavy metals in water and in size-fractionated sediments for rivers of the southwestern Caspian Sea. A case study was carried out on 18 sampling stations in nine rivers. In the first step, the concentrations of heavy metals (Cu, Zn, Cr, Fe, Mn, Pb, Ni, and Cd) were determined in water and size-fractionated sediment samples. Water sampling sites were classified by hierarchical cluster analysis (HCA) using squared Euclidean distance with Ward's method. In addition, canonical correlation analysis was used to interpret the results and the relationships between the concentrations of heavy metals in the river water and sediment samples. Based on the HCA results for the river water samples, the rivers were grouped into two classes: those having no pollution and those having low pollution. The CCA results revealed numerous relationships between the rivers of Iran's Guilan province and their size-fractionated sediment samples. Heavy metal concentrations in sediments of 0.038 to 0.125 mm diameter were slightly correlated with those of the water samples.
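
    The core CCA step can be sketched with scikit-learn: one block holds metal concentrations in water, the other holds concentrations in a given sediment size fraction, and the correlations between paired canonical variates quantify the water-sediment relationship. The data below are random placeholders for the 18 stations, not the study's measurements.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        metals = ["Cu", "Zn", "Cr", "Fe", "Mn", "Pb", "Ni", "Cd"]
        X = rng.lognormal(size=(18, len(metals)))   # stand-in: metals in water
        Y = rng.lognormal(size=(18, len(metals)))   # stand-in: metals in one size fraction

        cca = CCA(n_components=2)
        cca.fit(X, Y)
        X_c, Y_c = cca.transform(X, Y)
        for k in range(2):   # canonical correlations of the paired variates
            r = np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1]
            print(f"canonical pair {k + 1}: r = {r:.2f}")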

  17. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations to the underlying power curves for detecting a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by whether the change point is assumed known or unknown. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates of the current SCR; in this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision while reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
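
    The power-by-simulation idea can be sketched as follows: simulate age-structured serostatus under a catalytic model whose SCR drops at a known change point, fit the change model and the stable model by maximum likelihood, and count how often a likelihood ratio test rejects stability. For brevity this sketch ignores seroreversion (the paper uses a reverse catalytic model), and all parameter values are illustrative.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import chi2

        rng = np.random.default_rng(0)

        def p_seropos(age, lam1, lam2, tau):
            """Catalytic model, no seroreversion: SCR falls from lam1 to lam2
            at tau years before the survey."""
            expo = np.where(age <= tau, lam2 * age, lam2 * tau + lam1 * (age - tau))
            return 1.0 - np.exp(-expo)

        def negll(params, age, y, tau):
            lam1, lam2 = params
            p = np.clip(p_seropos(age, lam1, lam2, tau), 1e-9, 1 - 1e-9)
            return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

        def power(n, lam1=0.05, reduction=0.5, tau=10.0, n_sim=300, alpha=0.05):
            """Power of the LRT for a drop in SCR at a known change point."""
            crit, hits, b = chi2.ppf(1 - alpha, df=1), 0, [(1e-6, 1.0)]
            for _ in range(n_sim):
                age = rng.uniform(1, 60, size=n)
                y = rng.random(n) < p_seropos(age, lam1, lam1 * reduction, tau)
                fit1 = minimize(negll, [0.05, 0.05], args=(age, y, tau), bounds=b * 2)
                fit0 = minimize(lambda l: negll([l[0], l[0]], age, y, tau),
                                [0.05], bounds=b)   # null: single, stable SCR
                hits += 2 * (fit0.fun - fit1.fun) > crit
            return hits / n_sim

        for n in (250, 500, 1000):
            print(f"n={n:5d}  power ~ {power(n):.2f}")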

  18. Point of data saturation was assessed using resampling methods in a survey with open-ended questions.

    PubMed

    Tran, Viet-Thi; Porcher, Raphael; Falissard, Bruno; Ravaud, Philippe

    2016-12-01

    To describe methods for determining sample sizes in surveys using open-ended questions, and to assess how resampling methods can be used to determine data saturation in such surveys, we searched the literature for surveys with open-ended questions and assessed the methods used to determine sample size in 100 studies selected at random. Then, we used Monte Carlo simulations on data from a previous study on the burden of treatment to assess the probability of identifying new themes as a function of the number of patients recruited. In the literature, 85% of researchers used a convenience sample, with a median size of 167 participants (interquartile range [IQR] = 69-406). In our simulation study, the probability of identifying at least one new theme for the next included subject was 32%, 24%, and 12% after the inclusion of 30, 50, and 100 subjects, respectively. The inclusion of 150 participants at random resulted in the identification of 92% of the themes (IQR = 91-93%) identified in the original study. In our study, data saturation was almost certainly reached for samples >150 participants. Our method may be used to decide whether to continue a study to find new themes or stop because of futility. Copyright © 2016 Elsevier Inc. All rights reserved.
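
    The resampling logic generalizes readily: given an indicator matrix of which subject mentioned which theme, repeatedly shuffle the inclusion order and estimate the probability that subject n+1 contributes a theme unseen among the first n. The sketch below uses synthetic data with unequal theme prevalences as a stand-in for the burden-of-treatment dataset.

        import numpy as np

        rng = np.random.default_rng(0)

        def prob_new_theme(themes, n, n_sim=2000):
            """P(subject n+1 mentions a theme absent from subjects 1..n),
            estimated by resampling inclusion orders. `themes` is a boolean
            subjects-by-themes matrix."""
            hits = 0
            for _ in range(n_sim):
                order = rng.permutation(themes.shape[0])[: n + 1]
                seen = themes[order[:n]].any(axis=0)
                hits += bool((themes[order[n]] & ~seen).any())
            return hits / n_sim

        prevalence = rng.uniform(0.01, 0.4, size=40)   # 40 themes, unequal rates
        data = rng.random((300, 40)) < prevalence      # 300 synthetic subjects
        for n in (30, 50, 100, 150):
            print(f"n={n:3d}  P(new theme) = {prob_new_theme(data, n):.2f}")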

  19. 40 CFR Appendix J to Part 50 - Reference Method for the Determination of Particulate Matter as PM10 in the Atmosphere

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... size fraction in the PM10 size range is then collected on a separate filter over the specified sampling... Each filter is weighed (after moisture equilibration) before and after use to determine the net weight... of the mass concentration range is determined by the repeatability of filter tare weights, assuming...

  20. 40 CFR Appendix J to Part 50 - Reference Method for the Determination of Particulate Matter as PM10 in the Atmosphere

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... size fraction in the PM10 size range is then collected on a separate filter over the specified sampling... Each filter is weighed (after moisture equilibration) before and after use to determine the net weight... of the mass concentration range is determined by the repeatability of filter tare weights, assuming...

  1. 40 CFR Appendix J to Part 50 - Reference Method for the Determination of Particulate Matter as PM10 in the Atmosphere

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... size fraction in the PM10 size range is then collected on a separate filter over the specified sampling... Each filter is weighed (after moisture equilibration) before and after use to determine the net weight... of the mass concentration range is determined by the repeatability of filter tare weights, assuming...

  2. The effective elastic properties of human trabecular bone may be approximated using micro-finite element analyses of embedded volume elements.

    PubMed

    Daszkiewicz, Karol; Maquer, Ghislain; Zysset, Philippe K

    2017-06-01

    Boundary conditions (BCs) and sample size affect the measured elastic properties of cancellous bone. Samples too small to be representative appear stiffer under kinematic uniform BCs (KUBCs) than under periodicity-compatible mixed uniform BCs (PMUBCs). To avoid those effects, we propose to determine the effective properties of trabecular bone using an embedded configuration. Cubic samples of various sizes (2.63, 5.29, 7.96, 10.58 and 15.87 mm) were cropped from micro-CT scans of femoral heads and vertebral bodies. They were converted into micro-finite element models and their stiffness tensor was established via six uniaxial and shear load cases. PMUBCs- and KUBCs-based tensors were determined for each sample. "In situ" stiffness tensors were also evaluated for the embedded configuration, i.e. when the loads were transmitted to the samples via a layer of trabecular bone. The Zysset-Curnier model accounting for bone volume fraction and fabric anisotropy was fitted to those stiffness tensors, and the model parameters ν0 (Poisson's ratio) and ε0 and μ0 (elastic and shear moduli) were compared between sizes. BCs and sample size had little impact on ν0. However, KUBCs- and PMUBCs-based ε0 and μ0, respectively, decreased and increased with growing size, though convergence was not reached even for our largest samples. Both BCs produced upper and lower bounds for the in situ values that were almost constant across sample dimensions, thus appearing as an approximation of the effective properties. PMUBCs also seem appropriate for mimicking the trabecular core, but they still underestimate its elastic properties (especially in shear) even for nearly orthotropic samples.

  3. A New Sample Size Formula for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…
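
    The abstract is truncated, but the Monte Carlo logic it rests on is compact: generate samples from a population with a known squared multiple correlation, fit the regression, and record how often the overall F test rejects. The sketch below is a generic illustration of such a simulation, not the specific Brooks-Barcikowski method; the effect size and dimensions are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def empirical_power(n, p=5, rho2=0.13, alpha=0.05, n_sim=2000):
            """Empirical power of the overall F test with p predictors and
            population squared multiple correlation rho2."""
            beta = np.full(p, np.sqrt(rho2 / p))     # equal-weight predictors
            hits = 0
            for _ in range(n_sim):
                X = rng.normal(size=(n, p))
                y = X @ beta + rng.normal(scale=np.sqrt(1 - rho2), size=n)
                Xd = np.column_stack([np.ones(n), X])
                b, *_ = np.linalg.lstsq(Xd, y, rcond=None)
                ssr = np.sum((Xd @ b - y.mean()) ** 2)   # regression SS
                sse = np.sum((y - Xd @ b) ** 2)          # residual SS
                F = (ssr / p) / (sse / (n - p - 1))
                hits += stats.f.sf(F, p, n - p - 1) < alpha
            return hits / n_sim

        for n in (50, 100, 150):
            print(f"n={n:4d}  power ~ {empirical_power(n):.2f}")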

  4. Sample preparation techniques for the determination of trace residues and contaminants in foods.

    PubMed

    Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M

    2007-06-15

    The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.

  5. High-resolution, submicron particle size distribution analysis using gravitational-sweep sedimentation.

    PubMed Central

    Mächtle, W

    1999-01-01

    Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions for submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of its utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040
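
    The size estimates in both methods rest on Stokes' law for a sphere settling in a centrifugal field: the sedimentation velocity scales with the squared diameter and the particle-fluid density difference. A minimal sketch of the diameter calculation (the turbidity/Mie concentration step is omitted, and the numbers are hypothetical):

        import numpy as np

        def stokes_diameter(v, rho_p, rho_f, eta, omega, r):
            """Stokes diameter (m) from sedimentation velocity in a centrifuge.

            v: radial settling velocity (m/s); rho_p, rho_f: particle and fluid
            densities (kg/m^3); eta: viscosity (Pa*s); omega: rotor speed (rad/s);
            r: radial position (m). Assumes rigid spheres in creeping flow.
            """
            return np.sqrt(18.0 * eta * v / ((rho_p - rho_f) * omega**2 * r))

        omega = 40000 * 2 * np.pi / 60    # 40,000 rpm in rad/s
        d = stokes_diameter(v=1e-6, rho_p=1050.0, rho_f=998.0,
                            eta=1.0e-3, omega=omega, r=0.065)
        print(f"Stokes diameter ~ {d * 1e9:.0f} nm")   # ~17 nm for these inputs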

  6. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^_L, ES^_U) calculated on the mean change in 3-month total inflammatory score for each method. A corresponding 95% CI [n_L(ES^_U), n_U(ES^_L)] was obtained on the post hoc sample size, reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject the null hypothesis H_0: ES = 0 versus the alternative hypotheses H_1: ES = ES^, ES = ES^_L and ES = ES^_U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study ES estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
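
    The post hoc sample size logic can be sketched with the usual normal approximation n = ((z_(1-alpha/2) + z_(1-beta)) / ES)^2, evaluated at the estimated effect size and at its CI bounds to get an interval for n. The paper itself used a one-sample t-test, for which this is a close approximation, and the pilot values below are hypothetical.

        import math
        from scipy.stats import norm

        def n_one_sample(es, alpha=0.05, power=0.80):
            """Normal-approximation n for a one-sample test of effect size es."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return math.ceil((z / es) ** 2)

        es_hat, es_lo, es_hi = 0.62, 0.18, 1.05   # hypothetical ES^ and its 95% CI
        print(n_one_sample(es_hat))   # 21:  post hoc n at the estimated ES
        print(n_one_sample(es_hi))    # 8:   lower bound of the n interval (larger ES)
        print(n_one_sample(es_lo))    # 243: upper bound of the n interval (smaller ES)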

  7. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    NASA Astrophysics Data System (ADS)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.

  8. An empirical analysis of the quantitative effect of data when fitting quadratic and cubic polynomials

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
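
    The same Monte Carlo logic fits in a few lines: sample noisy observations of a known model function, fit a low-order polynomial by least squares, and track the mean-squared error of the fit against the true curve as the sample size grows. The model function, noise level, and sizes below are illustrative stand-ins for the study's settings.

        import numpy as np

        rng = np.random.default_rng(0)

        def mse_of_fit(n, degree=2, noise=0.2, n_sim=2000):
            """Mean squared error of a polynomial fit to n noisy samples of a
            known transcendental model function."""
            x = np.linspace(0.0, 1.0, n)
            truth = np.sin(2.0 * x)                # stand-in model function
            errs = []
            for _ in range(n_sim):
                y = truth + rng.normal(scale=noise, size=n)
                coef = np.polyfit(x, y, degree)
                errs.append(np.mean((np.polyval(coef, x) - truth) ** 2))
            return np.mean(errs)

        for n in (10, 20, 40, 80, 160):   # improvement flattens as n grows
            print(f"n={n:4d}  MSE = {mse_of_fit(n):.5f}")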

  9. Bivariate mass-size relation as a function of morphology as determined by Galaxy Zoo 2 crowdsourced visual classifications

    NASA Astrophysics Data System (ADS)

    Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie

    2016-01-01

    It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution differs between passive and star-forming galaxy populations. However, the devil is in the details, and the precise evolution is still a matter of debate, since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming, in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications, and we split the sample not only into disk- and bulge-dominated galaxies but also into finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric maximum likelihood estimator that has been modified to account for measurement errors as well as upper limits on galaxy sizes.

  10. A field instrument for quantitative determination of beryllium by activation analysis

    USGS Publications Warehouse

    Vaughn, William W.; Wilson, E.E.; Ohm, J.M.

    1960-01-01

    A low-cost instrument has been developed for quantitative determination of beryllium in the field by activation analysis. The instrument makes use of the gamma-neutron reaction between gammas emitted by an artificially radioactive source (Sb124) and beryllium as it occurs in nature. The instrument and power source are mounted in a panel-type vehicle. Samples are prepared by hand-crushing the rock to approximately ?-inch mesh size and smaller. Sample volumes are kept constant by means of a standard measuring cup. Instrument calibration, made by using standards of known BeO content, indicates the analyses are reproducible and accurate to within ± 0.25 percent BeO in the range from 1 to 20 percent BeO with a sample counting time of 5 minutes. Sensitivity of the instrument may be increased somewhat by increasing the source size, the sample size, or by enlarging the cross-sectional area of the neutron-sensitive phosphor normal to the neutron flux.

  11. Landsat image and sample design for water reservoirs (Rapel dam Central Chile).

    PubMed

    Lavanderos, L; Pozo, M E; Pattillo, C; Miranda, H

    1990-01-01

    Spatial heterogeneity of the Rapel reservoir surface waters is analyzed through Landsat images. The image digital counts are used with the aim of developing an a priori quantitative sample design. Natural horizontal stratification of the Rapel Reservoir (Central Chile) is produced mainly by suspended solids. The spatial heterogeneity conditions of the reservoir for the Spring 86-Summer 87 period were determined by qualitative analysis and image processing of MSS Landsat bands 1 and 3. The space-time variations of the different observed strata were obtained with multitemporal image analysis. A random stratified sample design (r.s.s.d.) was developed, based on statistical analysis of the digital counts. Strata population sizes, as well as the average, variance, and sampling size of the digital counts, were obtained by the r.s.s.d. method. The stratification determined by analysis of the satellite images was later correlated with ground data. Though the stratification of the reservoir is constant over time, the shape and size of the strata vary.
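
    One standard way to turn such per-stratum statistics into a design is Neyman allocation, which assigns the total sample to strata in proportion to stratum size times stratum standard deviation. Whether the study used exactly this rule is not stated in the record, so the sketch below is a generic illustration with made-up digital-count statistics.

        import numpy as np

        def neyman_allocation(n_total, stratum_sizes, stratum_sds):
            """Allocate n_total across strata proportional to N_h * S_h."""
            w = np.asarray(stratum_sizes, float) * np.asarray(stratum_sds, float)
            alloc = np.floor(n_total * w / w.sum()).astype(int)
            for i in np.argsort(-w)[: n_total - alloc.sum()]:
                alloc[i] += 1                  # hand out the rounding remainder
            return alloc

        # Hypothetical strata from digital-count clustering: pixel counts and SDs.
        print(neyman_allocation(500, [12000, 54000, 30000], [4.1, 9.8, 6.0]))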

  12. Approximate sample sizes required to estimate length distributions

    USGS Publications Warehouse

    Miranda, L.E.

    2007-01-01

    The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to define length status more precisely with more informative estimators. © Copyright by the American Fisheries Society 2007.
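
    The bootstrap logic behind these numbers can be sketched directly: resample n fish from a reference length distribution and find the smallest n at which the estimate falls within 10% of the reference value in at least 80% of resamples. The sketch below applies this to mean length, with a synthetic reference distribution standing in for the real ones.

        import numpy as np

        rng = np.random.default_rng(0)
        reference = rng.gamma(shape=4.0, scale=9.0, size=5000)  # stand-in lengths (cm)
        target = reference.mean()

        def coverage(n, n_boot=2000, tol=0.10):
            """Share of size-n bootstrap samples whose mean is within tol of target."""
            means = rng.choice(reference, size=(n_boot, n), replace=True).mean(axis=1)
            return np.mean(np.abs(means - target) / target <= tol)

        for n in (25, 50, 75, 100, 150):   # pick the smallest n with coverage >= 0.80
            print(f"n={n:3d}  P(within 10%) = {coverage(n):.2f}")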

  13. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
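
    The mechanics of RSS are easy to simulate under perfect ranking: for set size k, draw k sets of k units, rank each set cheaply, and measure only the i-th ranked unit of the i-th set. Comparing the variance of the RSS mean with that of a simple random sample (SRS) mean over the same number of measured units gives the relative efficiency; the cost and ranking-error structures that are the paper's focus are omitted here.

        import numpy as np

        rng = np.random.default_rng(0)

        def rss_mean(k):
            """Ranked set sample mean, set size k, perfect ranking, one cycle."""
            vals = []
            for i in range(k):
                s = np.sort(rng.normal(size=k))  # rank a set of k cheap-to-rank units
                vals.append(s[i])                # measure only the i-th order statistic
            return np.mean(vals)

        def var_of(estimator, n_sim=20000):
            return np.var([estimator() for _ in range(n_sim)])

        k = 4
        v_rss = var_of(lambda: rss_mean(k))
        v_srs = var_of(lambda: np.mean(rng.normal(size=k)))  # same number measured
        print(f"relative efficiency SRS/RSS at k={k}: {v_srs / v_rss:.2f}")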

  14. A general approach for sample size calculation for the three-arm 'gold standard' non-inferiority design.

    PubMed

    Stucke, Kathrin; Kieser, Meinhard

    2012-12-10

    In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.

  15. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
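
    The "simplest approach" mentioned above is compact enough to sketch: compute the sample size under individual randomization, then inflate it by the design effect 1 + (m - 1) * ICC for mean cluster size m and intracluster correlation ICC. All numbers below are illustrative.

        import math

        def crt_sample_size(n_individual, mean_cluster_size, icc):
            """Inflate an individually randomized total n by the design effect."""
            deff = 1 + (mean_cluster_size - 1) * icc
            n_total = math.ceil(n_individual * deff)
            n_clusters = math.ceil(n_total / mean_cluster_size)
            return n_total, n_clusters

        # E.g., 400 participants under individual randomization, clusters of
        # about 20, ICC = 0.05 -> 780 participants in 39 clusters.
        print(crt_sample_size(400, 20, 0.05))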

  16. Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.

    PubMed

    Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T

    2015-03-01

    It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine the optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse versus rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plans to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.

  17. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account either by adjusting the total sample size using a designated design effect or by adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret, and it connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment of the mean cluster size alone or simultaneous adjustment of the mean cluster size and number of clusters, and is a flexible alternative to, and a useful complement to, existing methods. Comparisons indicated that the relative efficiency defined here is greater than that in the literature under some conditions, and under other conditions it may be smaller, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter thus suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.

  18. Mesh-size effects on drift sample composition as determined with a triple net sampler

    USGS Publications Warehouse

    Slack, K.V.; Tilley, L.J.; Kennelly, S.S.

    1991-01-01

    Nested nets of three different mesh apertures were used to study mesh-size effects on drift collected in a small mountain stream. The innermost, middle, and outermost nets had, respectively, 425 µm, 209 µm and 106 µm openings, a design that reduced clogging while partitioning collections into three size groups. The open area of mesh in each net, from largest to smallest mesh opening, was 3.7, 5.7 and 8.0 times the area of the net mouth. Volumes of filtered water were determined with a flowmeter. The results are expressed as (1) drift retained by each net, (2) drift that would have been collected by a single net of given mesh size, and (3) the percentage of total drift (the sum of the catches from all three nets) that passed through the 425 µm and 209 µm nets. During a two day period in August 1986, Chironomidae larvae were dominant numerically in all 209 µm and 106 µm samples and midday 425 µm samples. Large drifters (Ephemerellidae) occurred only in 425 µm or 209 µm nets, but the general pattern was an increase in abundance and number of taxa with decreasing mesh size. Relatively more individuals occurred in the larger mesh nets at night than during the day. The two larger mesh sizes retained 70% of the total sediment/detritus in the drift collections, and this decreased the rate of clogging of the 106 µm net. If an objective of a sampling program is to compare drift density or drift rate between areas or sampling dates, the same mesh size should be used for all sample collection and processing. The mesh aperture used for drift collection should retain all species and life stages of significance in a study. The nested net design enables an investigator to test the adequacy of drift samples. © 1991 Kluwer Academic Publishers.

  19. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, bootstrap sample size estimates for comparing two parallel-design arms with continuous data are presented for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculations by mathematical formulas (under the normal distribution assumption) for the same data are also carried out. The power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods using data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by the bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation from the beginning, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during the course of bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
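
    The paper's recommendation, using the same test in the bootstrap as in the planned analysis, can be sketched as follows: resample each arm's pilot data at a candidate sample size, apply the Wilcoxon rank-sum (Mann-Whitney) test, and take the rejection rate as the power at that size. The pilot arrays below are lognormal stand-ins for real historical data.

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(0)
        pilot_a = rng.lognormal(mean=0.0, sigma=1.0, size=40)  # stand-in pilot arm A
        pilot_b = rng.lognormal(mean=0.5, sigma=1.0, size=40)  # stand-in pilot arm B

        def bootstrap_power(n_per_arm, alpha=0.05, n_boot=2000):
            """Power of the rank-sum test at n per arm, by resampling the pilots."""
            hits = 0
            for _ in range(n_boot):
                a = rng.choice(pilot_a, size=n_per_arm, replace=True)
                b = rng.choice(pilot_b, size=n_per_arm, replace=True)
                hits += mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha
            return hits / n_boot

        for n in (20, 40, 60, 80):
            print(f"n/arm={n:3d}  power ~ {bootstrap_power(n):.2f}")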

  20. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
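
    The Jensen's-inequality effect is simple to reproduce: estimate each survival rate from a binomial sample of n individuals, rebuild the projection matrix, take its dominant eigenvalue, and compare the average of those estimates with the true lambda. The toy three-stage matrix below is illustrative, not the study's; low survival (0.5) is where the abstract reports the bias is worst.

        import numpy as np

        rng = np.random.default_rng(0)

        def build_matrix(s1, s2, s3):
            return np.array([[0.0, 1.5, 3.0],    # stage fertilities (toy values)
                             [s1,  0.0, 0.0],    # survival/transition rates
                             [0.0, s2,  s3 ]])

        TRUE_S = (0.5, 0.5, 0.5)
        true_lambda = max(np.linalg.eigvals(build_matrix(*TRUE_S)).real)

        def mean_lambda_hat(n, n_sim=5000):
            """Mean dominant eigenvalue when each survival rate is estimated
            from a binomial sample of n individuals."""
            lams = []
            for _ in range(n_sim):
                s_hat = [rng.binomial(n, s) / n for s in TRUE_S]
                lams.append(max(np.linalg.eigvals(build_matrix(*s_hat)).real))
            return np.mean(lams)

        for n in (10, 25, 50, 100, 500):   # bias shrinks as n grows
            print(f"n={n:4d}  bias = {mean_lambda_hat(n) - true_lambda:+.4f}")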

  1. Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research

    ERIC Educational Resources Information Center

    Cook, David A.; Hatala, Rose

    2015-01-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…

  2. Effects of plot size on forest-type algorithm accuracy

    Treesearch

    James A. Westfall

    2009-01-01

    The Forest Inventory and Analysis (FIA) program utilizes an algorithm to consistently determine the forest type for forested conditions on sample plots. Forest type is determined from tree size and species information. Thus, the accuracy of results is often dependent on the number of trees present, which is highly correlated with plot area. This research examines the...

  3. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.

  4. Inference and sample size calculation for clinical trials with incomplete observations of paired binary outcomes.

    PubMed

    Zhang, Song; Cao, Jing; Ahn, Chul

    2017-02-20

    We investigate the estimation of intervention effect and sample size determination for experiments in which subjects are supposed to contribute paired binary outcomes but some observations are incomplete. We propose a hybrid estimator to appropriately account for the mixed nature of the observed data: paired outcomes from those who contribute complete pairs of observations, and unpaired outcomes from those who contribute either pre-intervention or post-intervention outcomes. We theoretically prove that if incomplete data are evenly distributed between the pre-intervention and post-intervention periods, the proposed estimator will always be more efficient than the traditional estimator. A numerical study shows that when the distribution of incomplete data is unbalanced, the proposed estimator remains superior when there is moderate-to-strong positive within-subject correlation. We further derive a closed-form sample size formula to help researchers determine how many subjects need to be enrolled in such studies. Simulation results suggest that the calculated sample size maintains the empirical power and type I error under various design configurations. We demonstrate the proposed method using a real application example. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Sample Size Methods for Estimating HIV Incidence from Cross-Sectional Surveys

    PubMed Central

    Brookmeyer, Ron

    2015-01-01

    Summary Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this paper we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this paper at the Biometrics website on Wiley Online Library. PMID:26302040

  6. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    PubMed

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
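
    The survey estimator described in these two records divides the count of biomarker-defined "recent" infections by the count of uninfected respondents and the mean duration of the recent stage. A simplified simulation of how the precision of that estimator scales with survey size is sketched below; it ignores the assay misclassification and duration uncertainty the papers explicitly model, and all epidemic parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def ci_half_width(n, incidence=0.02, mu_years=0.5, prev=0.25, n_sim=2000):
            """Simulated 95% CI half-width for incidence from one survey of size n."""
            p_recent = incidence * mu_years * (1 - prev)   # fraction in recent stage
            probs = [p_recent, prev - p_recent, 1 - prev]  # recent, non-recent, uninfected
            est = []
            for _ in range(n_sim):
                n_recent, _, n_uninf = rng.multinomial(n, probs)
                est.append(n_recent / (n_uninf * mu_years))
            lo, hi = np.percentile(est, [2.5, 97.5])
            return (hi - lo) / 2

        for n in (2000, 5000, 10000):
            print(f"n={n:6d}  CI half-width ~ {ci_half_width(n):.4f} per year")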

  7. Sample size considerations for clinical research studies in nuclear cardiology.

    PubMed

    Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J

    2015-12-01

    Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
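    As a concrete instance of one calculation named above, the per-group sample size for a two-sample t test can be obtained with statsmodels' power API; the effect size, power, and alpha below are illustrative values, not taken from the article.

        import math
        from statsmodels.stats.power import TTestIndPower

        # Solve for n per group given Cohen's d, alpha, and desired power.
        n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                                  alpha=0.05,
                                                  power=0.80,
                                                  alternative='two-sided')
        print(f"n per group = {math.ceil(n_per_group)}")  # ~64 for d = 0.5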

  8. Synthesis and characterization of nanocrystalline mesoporous zirconia using supercritical drying.

    PubMed

    Tyagi, Beena; Sidhpuria, Kalpesh; Shaik, Basha; Jasra, Raksh Vir

    2006-06-01

    Nano-crystalline zirconia aerogel was synthesized by the sol-gel technique and supercritical drying with n-propanol at and above the supercritical temperature (235-280 degrees C) and pressure (48-52 bar) of n-propanol. Zirconia xerogel samples were also prepared by the conventional thermal drying method for comparison with the supercritically dried samples. Crystalline phase, crystallite size, surface area, pore volume, and pore size distribution were determined for all the samples in detail to understand the effect of the gel drying method on these properties. Supercritical drying of zirconia gel was observed to give thermally stable, nano-crystalline, tetragonal zirconia aerogels having high specific surface area and porosity with a narrow and uniform pore size distribution as compared to thermally dried zirconia. With supercritical drying, zirconia samples show the formation of only mesopores, whereas in thermally dried samples a substantial number of micropores are observed along with mesopores. The samples prepared using supercritical drying yield nano-crystalline zirconia with a smaller crystallite size (4-6 nm) compared to the larger crystallite size (13-20 nm) observed with thermally dried zirconia.

  9. Variability of Phytoplankton Size Structure in Response to Changes in Coastal Upwelling Intensity in the Southwestern East Sea

    NASA Astrophysics Data System (ADS)

    Shin, Jung-Wook; Park, Jinku; Choi, Jang-Geun; Jo, Young-Heon; Kang, Jae Joong; Joo, HuiTae; Lee, Sang Heon

    2017-12-01

    The aim of this study was to examine the size structure of phytoplankton under varying coastal upwelling intensities and to determine the resulting primary productivity in the southwestern East Sea. Samples of phytoplankton assemblages were collected on five occasions from the Hupo Bank, off the east coast of Korea, during 2012-2013. Because two major surface currents have a large effect on water mass transport in this region, we first performed a Backward Particle Tracking Experiment (BPTE) to determine the coastal sea from which the collected samples originated according to advection time of BPTE particles, following which we used upwelling age (UA) to determine the intensity of coastal upwelling in the region of origin for each sample. Only samples that were affected by coastal upwelling in the region of origin were included in subsequent analyses. We found that as UA increased, there was a decreasing trend in the concentration of picophytoplankton, and increasing trends in the concentration of nanophytoplankton and microphytoplankton. We also examined the relationship between the size structure of phytoplankton and primary productivity in the Ulleung Basin (UB), which has experienced significant variation over the past decade. We found that primary productivity in UB was closely related to the strength of the southerly wind, which is the most important mechanism for coastal upwelling in the southwestern East Sea. Thus, the size structure of phytoplankton is determined by the intensity of coastal upwelling, which is regulated by the southerly wind, and makes an important contribution to primary productivity.

  10. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.

  11. A Compton scattering technique to determine wood density and locating defects in it

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tondon, Akash, E-mail: akashtondonnsl@gmail.com; Sandhu, B. S.; Singh, Bhajan

    A Compton scattering technique is presented to determine density and void location in wooden samples. The technique uses a well-collimated gamma ray beam from 137Cs along with a NaI(Tl) scintillation detector. First, a linear relationship is established between Compton scattered intensity and the known density of chemical compounds, and then the density of the wood is determined from this linear relation. In another experiment, the penetrating ability of gamma rays is exploited to detect voids in a wooden (low Z) sample. The sudden reduction in the Compton scattered intensities agrees well with the position and size of voids in the wooden sample. It is concluded that wood density and voids of size ∼ 4 mm and larger can be detected easily by this method.

  12. What big size you have! Using effect sizes to determine the impact of public health nursing interventions.

    PubMed

    Johnson, K E; McMorris, B J; Raynor, L A; Monsen, K A

    2013-01-01

    The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data. We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Client problems were documented anywhere from 6 (Health Care Supervision) to 906 (Caretaking/parenting) times. On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d ≥ 0.80) was approximately 0.60 (range = 0.50 - 1.03) regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record). Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Such disclosures will allow for increased recognition of effective interventions.
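    A short illustration of the contrast the authors draw, using made-up admission and discharge scores (the Omaha System data themselves are not reproduced here); whether d is computed from the change-score SD or a pooled SD is a design choice the abstract does not settle, so the version below is one common option.

        import numpy as np
        from scipy import stats

        admission = np.array([2.8, 3.0, 2.5, 3.2, 2.9, 3.1])   # synthetic 1-5 ratings
        discharge = np.array([3.6, 3.9, 3.1, 4.0, 3.5, 3.8])

        diff = discharge - admission
        d = diff.mean() / diff.std(ddof=1)          # Cohen's d from change scores
        t, p = stats.ttest_rel(discharge, admission)
        print(f"Cohen's d = {d:.2f}, p = {p:.4f}")  # d >= 0.80 counts as 'large'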

  13. [Ultra-Fine Pressed Powder Pellet Sample Preparation XRF Determination of Multi-Elements and Carbon Dioxide in Carbonate].

    PubMed

    Li, Xiao-li; An, Shu-qing; Xu, Tie-min; Liu, Yi-bo; Zhang, Li-juan; Zeng, Jiang-ping; Wang, Na

    2015-06-01

    The main analysis errors for pressed powder pellets of carbonate come from the particle-size effect and the mineral effect. To eliminate the particle-size effect, ultrafine pressed powder pellet sample preparation is used here for the determination of multiple elements and carbon dioxide in carbonate. A FRITSCH planetary micro mill with tungsten carbide media is used to prepare the ultrafine powder; wet grinding is preferred to overcome agglomeration during grinding. The surface morphology of the pellet becomes smoother and more even, and the Compton scatter effect is reduced, as the particle size decreases. The intensity of the spectral line varies with particle size; generally, it increases as particle size decreases. However, when the particle size of more than one component of the material is decreased, the intensity of the spectral line may increase (for S, Si, Mg) or decrease (for Ca, Al, Ti, K), depending on the respective mass absorption coefficients. The change in phase composition with milling is also studied, and the incident depth for each element is given from theoretical calculation. When the sample is ground to a particle size smaller than the penetration depth of all the analytes, the effect of particle size on the intensity of the spectral line is greatly reduced. In the experiment, when the sample is ground to less than 8 μm (d95), the particle-size effect is largely eliminated; with correction by theoretical α coefficients and empirical coefficients, 14 major, minor, and trace elements in the carbonate can be determined accurately, and the precision of the method is much improved, with RSD < 2% except for Na2O. Carbon is an ultra-light element with a low fluorescence yield and serious interference; with the multilayer analyzing crystal PX4, a coarse collimator, and empirical correction, the X-ray spectrometer can be used to determine carbon dioxide in carbonate quantitatively. The measured carbon intensity increases with repeated measurement and with time delay, even when the pellet is stored in a desiccator, so using a freshly pressed powder pellet is suggested.

  14. Sediment quantity and quality in three impoundments in Massachusetts

    USGS Publications Warehouse

    Zimmerman, Marc James; Breault, Robert F.

    2003-01-01

    As part of a study with an overriding goal of providing information that would assist State and Federal agencies in developing screening protocols for managing sediments impounded behind dams that are potential candidates for removal, the U.S. Geological Survey determined sediment quantity and quality at three locations: one on the French River and two on Yokum Brook, a tributary to the west branch of the Westfield River. Data collected with a global positioning system and a geographic information system, together with sediment-thickness data, aided in the creation of sediment maps and the calculation of sediment volumes at Perryville Pond on the French River in Webster, Massachusetts, and at the Silk Mill and Ballou Dams on Yokum Brook in Becket, Massachusetts. From these data the following sediment volumes were determined: Perryville Pond, 71,000 cubic yards; Silk Mill, 1,600 cubic yards; and Ballou, 800 cubic yards. Sediment characteristics were assessed in terms of grain size and concentrations of potentially hazardous organic compounds and metals. Assessment of the approaches and methods used at study sites indicated that ground-penetrating radar produced data that were extremely difficult and time-consuming to interpret for the three study sites. Because of these difficulties, a steel probe was ultimately used to determine sediment depth and extent for inclusion in the sediment maps. Use of these methods showed that, where sampling sites were accessible, a machine-driven coring device would be preferable to the physically exhausting, manual sediment-coring methods used in this investigation. Enzyme-linked immunosorbent assays were an effective tool for screening large numbers of samples for a range of organic contaminant compounds. An example calculation of the number of samples needed to characterize mean concentrations of contaminants indicated that the number of samples collected for most analytes was adequate; however, additional analyses for lead, copper, silver, arsenic, total petroleum hydrocarbons, and chlordane are needed to meet the criteria determined from the calculations. Particle-size analysis did not reveal a clear spatial distribution pattern at Perryville Pond. On average, less than 65 percent of each sample was greater in size than very fine sand. The sample with the highest percentage of clay-sized particles (24.3 percent) was collected just upstream from the dam and generally had the highest concentrations of contaminants determined here. In contrast, more than 90 percent of the sediment samples in the Becket impoundments had grain sizes larger than very fine sand; as determined by direct observation, rocks, cobbles, and boulders constituted a substantial amount of the material impounded at Becket. In general, the highest percentages of the finest particles, clays, occurred in association with the highest concentrations of contaminants. Enzyme-linked immunosorbent assays of the Perryville samples showed the widespread presence of petroleum hydrocarbons (16 out of 26 samples), polycyclic aromatic hydrocarbons (23 out of 26 samples), and chlordane (18 out of 26 samples); polychlorinated biphenyls were detected in five samples from four locations. Neither petroleum hydrocarbons nor polychlorinated biphenyls were detected at Becket, and chlordane was detected in only one sample. All 14 Becket samples contained polycyclic aromatic hydrocarbons. Replicate quality-control analyses revealed consistent results between paired samples.
Samples from throughout Perryville Pond contained a number of metals at potentially toxic concentrations. These metals included arsenic, cadmium, copper, lead, nickel, and zinc. At Becket, no metals were found in elevated concentrations. In general, most of the concentrations of organic compounds and metals detected in Perryville Pond exceeded standards for benthic organisms, but only rarely exceeded standards for human contact. The most highly contaminated samples were
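    A minimal sketch of the kind of sample-number calculation mentioned above, assuming the textbook form n = (t*s/d)^2 for estimating a mean concentration to within an allowable error d; the report's exact criteria are not given here, so the values are placeholders.

        import math
        from scipy import stats

        def n_for_mean(s, d, alpha=0.05, n0=30, iters=10):
            """Iterate because the t quantile depends on n."""
            n = n0
            for _ in range(iters):
                t = stats.t.ppf(1 - alpha / 2, df=n - 1)
                n = max(2, math.ceil((t * s / d) ** 2))
            return n

        # e.g. SD of 12 ug/g, allowable error of 5 ug/g in the mean
        print(n_for_mean(s=12.0, d=5.0))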

  15. Measuring the molecular dimensions of wine tannins: comparison of small-angle X-ray scattering, gel-permeation chromatography and mean degree of polymerization.

    PubMed

    McRae, Jacqui M; Kirby, Nigel; Mertens, Haydyn D T; Kassara, Stella; Smith, Paul A

    2014-07-23

    The molecular size of wine tannins can influence astringency, yet it has been unclear whether the standard methods for determining average tannin molecular weight (MW), including gel-permeation chromatography (GPC) and depolymerization reactions, actually relate to the size of the tannin under wine-like conditions. Small-angle X-ray scattering (SAXS) was therefore used to determine the molecular sizes and corresponding MWs of wine tannin samples from 3- and 7-year-old Cabernet Sauvignon wines in a variety of wine-like matrixes (5-15% and 100% ethanol; 0-200 mM NaCl; pH 3.0-4.0), and the results were compared with those measured using the standard methods. The SAXS results indicated that the tannin samples from the older wine were larger than those of the younger wine and that wine composition did not greatly affect tannin molecular size. The average tannin MWs as determined by GPC correlated strongly with the SAXS results, suggesting that this method does give a good indication of tannin molecular size under wine-like conditions. The MW as determined from the depolymerization reactions did not correlate as strongly with the SAXS results. To our knowledge, SAXS measurements have not previously been attempted for wine tannins.
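    For readers unfamiliar with how a molecular size falls out of SAXS data, a standard first step (not necessarily the authors' exact procedure) is a Guinier fit, ln I(q) = ln I0 - (Rg^2/3) q^2, valid at low q (roughly q*Rg < 1.3), which yields the radius of gyration Rg.

        import numpy as np

        def guinier_rg(q, intensity):
            """Fit the Guinier region and return the radius of gyration."""
            slope, _ = np.polyfit(q**2, np.log(intensity), 1)
            return np.sqrt(-3.0 * slope)

        # Synthetic low-q data for a particle with Rg = 2.0 nm (illustrative)
        q = np.linspace(0.05, 0.5, 20)                 # 1/nm
        I = 100.0 * np.exp(-(q * 2.0) ** 2 / 3.0)
        print(f"Rg = {guinier_rg(q, I):.2f} nm")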

  16. Single-image diffusion coefficient measurements of proteins in free solution.

    PubMed

    Zareh, Shannon Kian; DeSantis, Michael C; Kessler, Jonathan M; Li, Je-Luen; Wang, Y M

    2012-04-04

    Diffusion coefficient measurements are important for many biological and material investigations, such as studies of particle dynamics and kinetics, and size determinations. Among current measurement methods, single particle tracking (SPT) offers the unique ability to simultaneously obtain location and diffusion information about a molecule while using only femtomoles of sample. However, the temporal resolution of SPT is limited to seconds for single-color-labeled samples. By directly imaging three-dimensional diffusing fluorescent proteins and studying the widths of their intensity profiles, we were able to determine the proteins' diffusion coefficients using single protein images of submillisecond exposure times. This simple method improves the temporal resolution of diffusion coefficient measurements to submilliseconds, and can be readily applied to a range of particle sizes in SPT investigations and applications in which diffusion coefficient measurements are needed, such as reaction kinetics and particle size determinations. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
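    A heavily simplified sketch of the idea: a molecule diffusing during the exposure blurs its own image, so the fitted spot width encodes the diffusion coefficient. Assuming, purely for illustration, the first-order relation s_obs^2 ≈ s_psf^2 + D*t_exp per dimension (the paper's actual width-versus-D calibration is more involved), D can be inverted from a single image.

        def diffusion_from_width(s_obs, s_psf, t_exp):
            """s_obs, s_psf in meters; t_exp in seconds; returns D in m^2/s.
            Simplified relation, for illustration only."""
            return (s_obs**2 - s_psf**2) / t_exp

        D = diffusion_from_width(s_obs=220e-9, s_psf=180e-9, t_exp=0.5e-3)
        print(f"D = {D:.2e} m^2/s")   # ~3.2e-11 m^2/s, i.e. ~32 um^2/s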

  17. Population size estimation in Yellowstone wolves with error-prone noninvasive microsatellite genotypes.

    PubMed

    Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas

    2003-07-01

    Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.

  18. The size of a pilot study for a clinical trial should be calculated in relation to considerations of precision and efficiency.

    PubMed

    Sim, Julius; Lewis, Martyn

    2012-03-01

    To investigate methods to determine the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations based on confidence intervals (CIs) for the sample standard deviation (SD). Based on CIs for the sample SD, methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
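    A minimal sketch of adjustment (1) above: inflate the observed pilot SD to the upper limit of a one-sided confidence interval before powering the main trial, so that the main study retains its nominal power with the stated confidence. The 80% confidence level and pilot values are examples, not recommendations from the paper.

        import math
        from scipy import stats

        def adjusted_sd(s_pilot, n_pilot, confidence=0.80):
            """Upper confidence limit for the SD, based on the chi-square
            distribution of (n-1)s^2/sigma^2."""
            df = n_pilot - 1
            chi2_low = stats.chi2.ppf(1 - confidence, df)
            return s_pilot * math.sqrt(df / chi2_low)

        s_adj = adjusted_sd(s_pilot=10.0, n_pilot=20)
        print(f"power the main trial with SD = {s_adj:.2f} instead of 10.0")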

  19. Role of Sample Processing Strategies at the European Union National Reference Laboratories (NRLs) Concerning the Analysis of Pesticide Residues.

    PubMed

    Hajeb, Parvaneh; Herrmann, Susan S; Poulsen, Mette E

    2017-07-19

    The guidance document SANTE 11945/2015 recommends that cereal samples be milled to a particle size preferably smaller than 1.0 mm and that extensive heating of the samples should be avoided. The aim of the present study was therefore to investigate the differences in milling procedures, obtained particle size distributions, and the resulting pesticide residue recovery when cereal samples were milled at the European Union National Reference Laboratories (NRLs) with their routine milling procedures. A total of 23 NRLs participated in the study. The oat and rye samples milled by each NRL were sent to the European Union Reference Laboratory on Cereals and Feedingstuff (EURL) for the determination of the particle size distribution and pesticide residue recovery. The results showed that the NRLs used several different brands and types of mills. Large variations in the particle size distributions and pesticide extraction efficiencies were observed even between samples milled by the same type of mill.

  20. The effect of grain size and cement content on index properties of weakly solidified artificial sandstones

    NASA Astrophysics Data System (ADS)

    Atapour, Hadi; Mortazavi, Ali

    2018-04-01

    The effects of textural characteristics, especially grain size, on index properties of weakly solidified artificial sandstones are studied. For this purpose, a relatively large number of laboratory tests were carried out on artificial sandstones that were produced in the laboratory. The prepared samples represent fifteen sandstone types consisting of five different median grain sizes and three different cement contents. Indices rock properties including effective porosity, bulk density, point load strength index, and Schmidt hammer values (SHVs) were determined. Experimental results showed that the grain size has significant effects on index properties of weakly solidified sandstones. The porosity of samples is inversely related to the grain size and decreases linearly as grain size increases. While a direct relationship was observed between grain size and dry bulk density, as bulk density increased with increasing median grain size. Furthermore, it was observed that the point load strength index and SHV of samples increased as a result of grain size increase. These observations are indirectly related to the porosity decrease as a function of median grain size.

  1. Storage effects on quantity and composition of dissolved organic carbon and nitrogen of lake water, leaf leachate and peat soil water.

    PubMed

    Heinz, Marlen; Zak, Dominik

    2018-03-01

    This study aimed to evaluate the effects of freezing and of cold storage at 4 °C on bulk dissolved organic carbon (DOC) and nitrogen (DON) concentrations and on size fractions determined with size exclusion chromatography (SEC), as well as on spectral properties of dissolved organic matter (DOM) analyzed with fluorescence spectroscopy. To account for differences in DOM composition and source, we analyzed storage effects for three different sample types: a lake water sample representing freshwater DOM, a leaf litter leachate of Phragmites australis representing a terrestrial, 'fresh' DOM source, and peatland porewater samples. According to our findings, one week of cold storage can bias DOC and DON determination. Overall, the determination of DOC and DON concentrations with SEC analysis was, for all three sample types, little affected by freezing. The findings derived for the sampling locations investigated here may not apply to other sampling locations and/or sample types. However, DOC size fractions and DON concentrations of formerly frozen samples should be interpreted with caution when sample concentrations are high. Alterations of some optical properties (HIX and SUVA254) due to freezing were evident, and we therefore recommend immediate analysis of samples intended for spectral analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
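    Since the modeled trial compares PID incidence between a screened and a control group, the underlying calculation is the classic two-proportion sample size formula; a sketch follows, with placeholder incidences rather than the trial's actual assumptions.

        import math
        from scipy import stats

        def n_per_group(p1, p2, alpha=0.05, power=0.80):
            """Per-group n to detect p1 vs p2 with a two-sided z test."""
            za = stats.norm.ppf(1 - alpha / 2)
            zb = stats.norm.ppf(power)
            p_bar = (p1 + p2) / 2
            num = (za * math.sqrt(2 * p_bar * (1 - p_bar))
                   + zb * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
            return math.ceil(num / (p1 - p2) ** 2)

        print(n_per_group(p1=0.03, p2=0.015))   # e.g. halving PID incidence

    As the abstract notes, the assumed timing of PID development changes the achievable relative risk, and hence p2 and the required n.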

  3. Laser Diffraction Techniques Replace Sieving for Lunar Soil Particle Size Distribution Data

    NASA Technical Reports Server (NTRS)

    Cooper, Bonnie L.; Gonzalez, C. P.; McKay, D. S.; Fruland, R. L.

    2012-01-01

    Sieving was used extensively until 1999 to determine the particle size distribution of lunar samples. This method is time-consuming and requires more than a gram of material in order to obtain a result in which one may have confidence. This is demonstrated by the difference in geometric mean and median for samples measured by [1], in which a 14-gram sample produced a geometric mean of approximately 52 micrometers, whereas two other samples of 1.5 grams gave means of approximately 63 and 69 micrometers. Sample allocations for sieving are typically much smaller than a gram, and many of the sample allocations received by our lab are 0.5 to 0.25 grams in mass. Basu [2] has described how the finest fraction of the soil is easily lost in the sieving process, and this effect is compounded when sample sizes are small.

  4. Microstructural Evaluation of Forging Parameters for Superalloy Disks

    NASA Technical Reports Server (NTRS)

    Falsey, John R.

    2004-01-01

    Forgings of a nickel-base superalloy were formed under several different strain rates and forging temperatures. Samples were taken from each forging condition to find the ASTM grain size and the as-large-as (ALA) grain size. The specimens were mounted in bakelite, polished, and etched, and optical microscopy was then used to determine grain size. The ASTM grain sizes of the specimens from each forging condition were plotted against strain rate, forging temperature, and presoak time. Grain sizes increased with increasing forging temperature. Grain sizes also increased with decreasing strain rate and increasing forging presoak time. The ALA grain size was determined for each forging condition using the ASTM standard method. Each ALA value was compared with the ASTM grain size for its forging condition to determine whether the grain sizes were uniform. The forging condition with a strain rate of 0.03/sec and supersolvus heat treatment produced non-uniform grains, indicated by critical grain growth. Other anomalies are noted as well.

  5. Comparison of sampling and test methods for determining asphalt content and moisture correction in asphalt concrete mixtures.

    DOT National Transportation Integrated Search

    1985-03-01

    The purpose of this report is to identify the differences, if any, between AASHTO and OSHD test procedures and results. This report addresses the effect of the size of samples taken in the field and evaluates the methods of determining the moisture content...

  6. Study of phase transformation and microstructure of alcohol washed titania nanoparticles for thermal stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaur, Manpreet, E-mail: manpreet.kaur@thapar.edu; Singh, Gaganjot; Bimbraw, Keshav

    Nanostructured titania has been successfully synthesized by hydrolysis of an alkoxide at calcination temperatures of 500 °C, 600 °C and 700 °C. As the calcination temperature increases, alcohol-washed samples show less rutile content than water-washed samples. Morphology and particle sizes were determined by field emission scanning electron microscopy (FESEM), while thermogravimetric-differential scanning calorimetry (TG-DSC) was used to determine thermal stability. Alcohol-washed samples undergo 30% weight loss, whereas 16% weight loss was observed in water-washed samples. The mean particle sizes were found to increase from 37 nm to 100.9 nm and from 35.3 nm to 55.2 nm for water- and alcohol-washed samples, respectively. Hydrolysis of the alkoxide was shown to be an effective means of preparing thermally stable titania when alcohol-washed samples are used as the precursor.

  7. Apparatus and method for the determination of grain size in thin films

    DOEpatents

    Maris, Humphrey J

    2000-01-01

    A method for the determination of grain size in a thin film sample comprising the steps of measuring first and second changes in the optical response of the thin film, comparing the first and second changes to find the attenuation of a propagating disturbance in the film and associating the attenuation of the disturbance to the grain size of the film. The second change in optical response is time delayed from the first change in optical response.

  8. Apparatus and method for the determination of grain size in thin films

    DOEpatents

    Maris, Humphrey J

    2001-01-01

    A method for the determination of grain size in a thin film sample comprising the steps of measuring first and second changes in the optical response of the thin film, comparing the first and second changes to find the attenuation of a propagating disturbance in the film and associating the attenuation of the disturbance to the grain size of the film. The second change in optical response is time delayed from the first change in optical response.

  9. Particle size fractionation of paralytic shellfish toxins (PSTs): seasonal distribution and bacterial production in the St Lawrence estuary, Canada.

    PubMed

    Michaud, S; Levasseur, M; Doucette, G; Cantin, G

    2002-10-01

    We determined the seasonal distribution of paralytic shellfish toxins (PSTs) and PST producing bacteria in > 15, 5-15, and 0.22-5 microm size fractions in the St Lawrence. We also measured PSTs in a local population of Mytilus edulis. PST concentrations were determined in each size fraction and in laboratory incubations of sub-samples by high performance liquid chromatography (HPLC), including the rigorous elimination of suspected toxin 'imposter' peaks. Mussel toxin levels were determined by mouse bioassay and HPLC. PSTs were detected in all size fractions during the summer sampling season, with 47% of the water column toxin levels associated with particles smaller than Alexandrium tamarense (< 15 microm). Even in the > 15 microm size fraction, we estimated that as much as 92% of PSTs could be associated with particles other than A. tamarense. Our results stress the importance of taking into account the potential presence of PSTs in size fractions other than that containing the known algal producer when attempting to model shellfish intoxication, especially during years of low cell abundance. Finally, our HPLC results confirmed the presence of bacteria capable of autonomous PST production in the St Lawrence as well as demonstrating their regular presence and apparent diversity in the plankton. Copyright 2002 Elsevier Science Ltd.

  10. On sample size and different interpretations of snow stability datasets

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test, or combinations of various tests, in order to detect differences with aspect and elevation. The question arose: how capable are such stability interpretations of supporting reliable conclusions? There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that had been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements are needed to obtain similar results (mainly stability differences with aspect or elevation) as with the complete dataset. The optimal sample size was determined in several ways: (i) assuming a nominal data scale, the sample size was determined for a given test, significance level, and power, by calculating the mean and standard deviation of the complete dataset; with this method it can also be determined whether the complete dataset itself constitutes an appropriate sample size. (ii) Smaller subsets were created with aspect distributions similar to the large dataset. We used 100 different subsets for each sample size. Statistical variations found in the complete dataset were also tested on the smaller subsets using the Mann-Whitney or the Kruskal-Wallis test. For each subset size, the number of subsets in which the significance level was reached was counted. For these tests, no nominal data scale was assumed. (iii) For the same subsets described above, the distribution of the aspect median was determined, and we counted how often this distribution was substantially different from the distribution obtained with the complete dataset. Since two valid stability interpretations were available (an objective and a subjective interpretation, as described above), the effect of the arbitrary choice of interpretation on spatial variability results was also tested. In over one third of the cases the two interpretations came to different results. The effect of these differences was studied with a method similar to that described in (iii): the distribution of the aspect median was determined for subsets of the complete dataset using both interpretations and compared against each other as well as against the results of the complete dataset. For the complete dataset the two interpretations showed mainly identical results; therefore, the subset size was determined from the point at which the results of the two interpretations converged.
A universal result for the optimal subset size cannot be presented, since results differed between the different situations contained in the dataset. The optimal subset size thus depends on the stability variation in a given situation, which is unknown initially. There are indications that for some situations even the complete dataset might not be large enough. At a subset size of approximately 25, the significant differences between aspect groups (as determined using the whole dataset) were obtained in only one out of five situations. In some situations, up to 20% of the subsets showed a substantially different distribution of the aspect median. Thus, in most cases, 25 measurements (which can be achieved by six two-person teams in one day) did not allow reliable conclusions to be drawn.
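    A minimal sketch of the subset procedure described in (ii), with synthetic stability scores standing in for the dataset (which is not available here): draw many subsets of a given size, test two aspect groups with the Mann-Whitney test, and record how often significance is reached.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        north = rng.normal(2.6, 1.0, 400)   # synthetic stability scores, aspect group 1
        south = rng.normal(3.0, 1.0, 400)   # synthetic stability scores, aspect group 2

        def fraction_significant(n_subset, n_rep=100, alpha=0.05):
            hits = 0
            for _ in range(n_rep):
                a = rng.choice(north, n_subset, replace=False)
                b = rng.choice(south, n_subset, replace=False)
                hits += stats.mannwhitneyu(a, b).pvalue < alpha
            return hits / n_rep

        for n in (10, 25, 50, 100):
            print(n, fraction_significant(n))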

  11. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background: Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results: To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions: We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725

  12. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
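    A minimal sketch of the risk calculation the abstract describes: under cluster sampling, the count of acceptable observations is modeled as beta-binomial rather than binomial, and a candidate design (sample size n, decision rule d) is screened against the two misclassification risks. The mean/overdispersion parameterization below is an assumption for illustration, not the paper's exact code.

        from scipy import stats

        def misclassification_risks(n, d, p_hi, p_lo, rho):
            """alpha: P(classify as bad | true rate p_hi);
            beta: P(classify as good | true rate p_lo).
            rho is the intracluster correlation inflating the variance."""
            def ab(p):
                k = (1 - rho) / rho          # beta concentration implied by rho
                return p * k, (1 - p) * k
            a_hi, b_hi = ab(p_hi)
            a_lo, b_lo = ab(p_lo)
            alpha = stats.betabinom.cdf(d, n, a_hi, b_hi)      # X <= d rejects
            beta = 1 - stats.betabinom.cdf(d, n, a_lo, b_lo)   # X > d accepts
            return alpha, beta

        print(misclassification_risks(n=60, d=48, p_hi=0.90, p_lo=0.70, rho=0.05))

    Scanning (n, d) pairs for the smallest n whose risks fall below the user-specified limits reproduces the kind of design search the paper automates.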

  13. 40 CFR 796.2750 - Sediment and soil adsorption isotherm.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... size analysis” is the determination of the various amounts of the different particle sizes in a sample... °C. (iii) Replications. Three replications of the experimental treatments shall be used. (iv) Soil...) Decrease the water content, air or oven-dry soils at or below 50 °C. (B) Reduce aggregate size before and...

  14. 40 CFR 796.2750 - Sediment and soil adsorption isotherm.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... size analysis” is the determination of the various amounts of the different particle sizes in a sample... °C. (iii) Replications. Three replications of the experimental treatments shall be used. (iv) Soil...) Decrease the water content, air or oven-dry soils at or below 50 °C. (B) Reduce aggregate size before and...

  15. Epistemological Issues in Astronomy Education Research: How Big of a Sample is "Big Enough"?

    NASA Astrophysics Data System (ADS)

    Slater, Stephanie; Slater, T. F.; Souri, Z.

    2012-01-01

    As astronomy education research (AER) continues to evolve into a sophisticated enterprise, we must begin to grapple with defining our epistemological parameters. Moreover, as we attempt to make pragmatic use of our findings, we must make a concerted effort to communicate those parameters in a sensible way to the larger astronomical community. One area of much current discussion involves a basic discussion of methodologies, and subsequent sample sizes, that should be considered appropriate for generating knowledge in the field. To address this question, we completed a meta-analysis of nearly 1,000 peer-reviewed studies published in top-tier professional journals. Data related to methodologies and sample sizes were collected from "hard science" and "human science" journals to compare the epistemological systems of these two bodies of knowledge. Working back in time from August 2011, the 100 most recent studies reported in each journal were used as a data source: Icarus, ApJ and AJ, NARST, IJSE, and SciEd. In addition, data were collected from the 10 most recent AER dissertations, a set of articles determined by the science education community to be the most influential in the field, and the nearly 400 articles used as reference materials for the NRC's Taking Science to School. Analysis indicates these bodies of knowledge have a great deal in common; each relies on a large variety of methodologies, and each builds its knowledge through studies that proceed from surprisingly small sample sizes. While both fields publish a small percentage of studies with large sample sizes, the vast majority of top-tier publications consist of rich studies of a small number of objects. We conclude that rigor in each field is determined not by a circumscription of methodologies and sample sizes, but by peer judgments that the methods and sample sizes are appropriate to the research question.

  16. Chandra Observations of Three Newly Discovered Quadruply Gravitationally Lensed Quasars

    NASA Astrophysics Data System (ADS)

    Pooley, David

    2017-09-01

    Our previous work has shown the unique power of Chandra observations of quadruply gravitationally lensed quasars to address several fundamental astrophysical issues. We have used these observations to (1) determine the cause of flux ratio anomalies, (2) measure the sizes of quasar accretion disks, (3) determine the dark matter content of the lensing galaxies, and (4) measure the stellar mass-to-light ratio (in fact, this is the only way to measure the stellar mass-to-light ratio beyond the solar neighborhood). In all cases, the main source of uncertainty in our results is the small size of the sample of known quads; only 15 systems are available for study with Chandra. We propose Chandra observations of three newly discovered quads, increasing the sample size by 20%.

  17. Size measuring techniques as tool to monitor pea proteins intramolecular crosslinking by transglutaminase treatment.

    PubMed

    Djoullah, Attaf; Krechiche, Ghali; Husson, Florence; Saurel, Rémi

    2016-01-01

    In this work, techniques for monitoring the intramolecular transglutaminase cross-links of pea proteins, based on protein size determination, were developed. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis profiles of transglutaminase-treated low concentration (0.01% w/w) pea albumin samples, compared to the untreated one (control), showed a higher electrophoretic migration of the major albumin fraction band (26 kDa), reflecting a decrease in protein size. This protein size decrease was confirmed, after DEAE column purification, by dynamic light scattering (DLS) where the hydrodynamic radius of treated samples appears to be reduced compared to the control one. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Geotechnical characterization of mined clay from Appalachian Ohio: challenges and implications for the clay mining industry.

    PubMed

    Moran, Anthony R; Hettiarachchi, Hiroshan

    2011-07-01

    Clayey soil found in coal mines in Appalachian Ohio is often sold to landfills for constructing Recompacted Soil Liners (RSL) in landfills. Since clayey soils possess low hydraulic conductivity, the suitability of mined clay for RSL in Ohio is first assessed by determining its clay content. When soil samples are tested in a laboratory, the same engineering properties are typically expected for the soils originated from the same source, provided that the testing techniques applied are standard, but mined clay from Appalachian Ohio has shown drastic differences in particle size distribution depending on the sampling and/or laboratory processing methods. Sometimes more than a 10 percent decrease in the clay content is observed in the samples collected at the stockpiles, compared to those collected through reverse circulation drilling. This discrepancy poses a challenge to geotechnical engineers who work on the prequalification process of RSL material as it can result in misleading estimates of the hydraulic conductivity of the samples. This paper describes a laboratory investigation conducted on mined clay from Appalachian Ohio to determine how and why the standard sampling and/or processing methods can affect the grain-size distributions. The variation in the clay content was determined to be due to heavy concentrations of shale fragments in the clayey soils. It was also concluded that, in order to obtain reliable grain size distributions from the samples collected at a stockpile of mined clay, the material needs to be processed using a soil grinder. Otherwise, the samples should be collected through drilling.

  19. Geotechnical Characterization of Mined Clay from Appalachian Ohio: Challenges and Implications for the Clay Mining Industry

    PubMed Central

    Moran, Anthony R.; Hettiarachchi, Hiroshan

    2011-01-01

    Clayey soil found in coal mines in Appalachian Ohio is often sold to landfills for constructing Recompacted Soil Liners (RSL) in landfills. Since clayey soils possess low hydraulic conductivity, the suitability of mined clay for RSL in Ohio is first assessed by determining its clay content. When soil samples are tested in a laboratory, the same engineering properties are typically expected for the soils originated from the same source, provided that the testing techniques applied are standard, but mined clay from Appalachian Ohio has shown drastic differences in particle size distribution depending on the sampling and/or laboratory processing methods. Sometimes more than a 10 percent decrease in the clay content is observed in the samples collected at the stockpiles, compared to those collected through reverse circulation drilling. This discrepancy poses a challenge to geotechnical engineers who work on the prequalification process of RSL material as it can result in misleading estimates of the hydraulic conductivity of the samples. This paper describes a laboratory investigation conducted on mined clay from Appalachian Ohio to determine how and why the standard sampling and/or processing methods can affect the grain-size distributions. The variation in the clay content was determined to be due to heavy concentrations of shale fragments in the clayey soils. It was also concluded that, in order to obtain reliable grain size distributions from the samples collected at a stockpile of mined clay, the material needs to be processed using a soil grinder. Otherwise, the samples should be collected through drilling. PMID:21845150

  20. Variation in aluminum, iron, and particle concentrations in oxic groundwater samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    NASA Astrophysics Data System (ADS)

    Szabo, Zoltan; Oden, Jeannette H.; Gibs, Jacob; Rice, Donald E.; Ding, Yuan

    2002-02-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering.

  1. Preconcentration and speciation of ultra-trace Se (IV) and Se (VI) in environmental water samples with nano-sized TiO2 colloid and determination by HG-AFS.

    PubMed

    Fu, Jiaqi; Zhang, Xu; Qian, Shahua; Zhang, Lin

    2012-05-30

    A unified method for speciation analysis of Se (IV) and Se (VI) in environmental water samples was developed using nano-sized TiO(2) colloid as the adsorbent and hydride generation atomic fluorescence spectrometry (HG-AFS) as the means of determination. When the pH of the bulk solution was between 6.0 and 7.0, more than 97.0% of Se (IV) was successfully adsorbed onto 1 mL of nano-sized TiO(2) colloid (0.2%), while Se (VI) was barely adsorbed. The method therefore made it possible to preconcentrate and determine Se (IV) and Se (VI) separately. The precipitated TiO(2) with the concentrated selenium was directly converted to a colloid without desorption, and the selenium in the resulting colloid was then determined by HG-AFS. The detection limits (3σ) of this method were 24 ng/L and 42 ng/L, with relative standard deviations (RSD) of 7.8% (n=6) and 7.0% (n=6), for Se (IV) and Se (VI), respectively. This simple and sensitive unified method was successfully applied to the separation and speciation of ultra-trace Se (IV) and Se (VI) in environmental water samples. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. 40 CFR 761.243 - Standard wipe sample method and size.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., AND USE PROHIBITIONS Determining a PCB Concentration for Purposes of Abandonment or Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe.../Rinse Cleanup as Recommended by the Environmental Protection Agency PCB Spill Cleanup Policy,” dated...

  3. Fish habitat conditions: Using the Northern/Intermountain Regions' inventory procedures for detecting differences on two differently managed watersheds

    Treesearch

    C. Kerry Overton; Michael A. Radko; Rodger L. Nelson

    1993-01-01

    Differences in fish habitat variables between the two studied watersheds may be related to differences in land management. Using the R1/R4 Watershed-Scale Fish Habitat Inventory Process, sample sizes of at least 30 habitat units were adequate for evaluating most habitat variables. Guidelines will help land managers determine the sample sizes required to detect...

  4. 40 CFR 761.304 - Determining sample location.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (a) For 1 square meter non-porous surface areas having the same size and shape, it is permissible to sample the same 10 cm by 10 cm location or position in each identical 1 square meter area. This location or position is determined in accordance with § 761.306 or § 761.308. (b) If some 1 square meter...

  5. 40 CFR 761.304 - Determining sample location.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (a) For 1 square meter non-porous surface areas having the same size and shape, it is permissible to sample the same 10 cm by 10 cm location or position in each identical 1 square meter area. This location or position is determined in accordance with § 761.306 or § 761.308. (b) If some 1 square meter...

  6. 40 CFR 761.304 - Determining sample location.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (a) For 1 square meter non-porous surface areas having the same size and shape, it is permissible to sample the same 10 cm by 10 cm location or position in each identical 1 square meter area. This location or position is determined in accordance with § 761.306 or § 761.308. (b) If some 1 square meter...

  7. 40 CFR 761.304 - Determining sample location.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (a) For 1 square meter non-porous surface areas having the same size and shape, it is permissible to sample the same 10 cm by 10 cm location or position in each identical 1 square meter area. This location or position is determined in accordance with § 761.306 or § 761.308. (b) If some 1 square meter...

  8. 40 CFR 761.304 - Determining sample location.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (a) For 1 square meter non-porous surface areas having the same size and shape, it is permissible to sample the same 10 cm by 10 cm location or position in each identical 1 square meter area. This location or position is determined in accordance with § 761.306 or § 761.308. (b) If some 1 square meter...

  9. Sample size determinations for group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms.

    PubMed

    Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H

    2017-02-01

    We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of the data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy imply correlation structures of outcomes that differ between arms, regardless of whether the research setting requires a two- or three-level data structure for the experimental arm. Therefore, the different correlations should be taken into account in statistical modeling and in sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive the sample size formulae and empirically validate them with simulation studies.
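
    A minimal sketch of the idea for the simplest partially nested case: the experimental arm is clustered in groups of size m with intraclass correlation rho, the control arm is unclustered, outcomes are normal with common variance, and a two-sided z-test is used. This is a generic textbook-style approximation, not the authors' derived formulae; all names and values are illustrative.

```python
import math
from scipy.stats import norm

def n_per_arm_partially_nested(delta, sigma2, m, rho, alpha=0.05, power=0.80):
    """Approximate n per arm when only the experimental arm is clustered."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    deff_exp = 1 + (m - 1) * rho   # variance inflation in the clustered arm
    deff_ctrl = 1.0                # control arm: individually randomized
    return math.ceil(z**2 * sigma2 * (deff_exp + deff_ctrl) / delta**2)

# e.g., effect size 0.4 SD, groups of 10, ICC 0.05 -> about 121 per arm
print(n_per_arm_partially_nested(delta=0.4, sigma2=1.0, m=10, rho=0.05))
```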

  10. The determination of specific forms of aluminum in natural water

    USGS Publications Warehouse

    Barnes, R.B.

    1975-01-01

    A procedure for analysis and pretreatment of natural-water samples to determine very low concentrations of Al is described which distinguishes the rapidly reacting equilibrium species from the metastable or slowly reacting macro ions and colloidal suspended material. Aluminum is complexed with 8-hydroxyquinoline (oxine), pH is adjusted to 8.3 to minimize interferences, and the aluminum oxinate is extracted with methyl isobutyl ketone (MIBK) prior to analysis by atomic absorption. To determine equilibrium species only, the contact time between sample and 8-hydroxyquinoline is minimized. The Al may be extracted at the sample site with a minimum of equipment and the MIBK extract stored for several weeks prior to atomic absorption analysis. Data obtained from analyses of 39 natural groundwater samples indicate that filtration through a 0.1-µm pore size filter is not an adequate means of removing all insoluble and metastable Al species present, and extraction of Al immediately after collection is necessary if only dissolved and readily reactive species are to be determined. An average of 63% of the Al present in natural waters that had been filtered through 0.1-µm pore size filters was in the form of monomeric ions. The total Al concentration, which includes all forms that passed through a 0.1-µm pore size filter, ranged from 2 to 70 µg/l. The concentration of Al in the form of monomeric ions ranged from below detection to 57 µg/l. Most of the natural water samples used in this study were collected from thermal springs and oil wells. © 1975.

  11. Are catchment-wide erosion rates really "Catchment-Wide"? Effects of grain size on erosion rates determined from 10Be

    NASA Astrophysics Data System (ADS)

    Reitz, M. A.; Seeber, L.; Schaefer, J. M.; Ferguson, E. K.

    2012-12-01

    Early studies pioneering the catchment-wide erosion rate method measured 10Be in alluvial sediment sampled at river mouths, using the sand-size grain fraction from the riverbeds to average upstream erosion rates and map erosion patterns. Finer particles (<0.0625 mm) were excluded to reduce the possibility of a wind-blown component of sediment, and coarser particles (>2 mm) were excluded to better approximate erosion from the entire upstream catchment area (coarse grains are generally found near the source). Now that the sensitivity of 10Be measurements is rapidly increasing, we can precisely measure erosion rates from rivers eroding active tectonic regions. These active regions create higher-energy drainage systems that erode faster and carry coarser sediment. In these settings, does the sand-sized fraction fully capture the average erosion of the upstream drainage area? Or does a different grain-size fraction provide a more accurate measure of upstream erosion? During a study of the Neto River in Calabria, southern Italy, we took 8 samples along the length of the river, focusing on collecting samples just below confluences with major tributaries, in order to use the high-resolution erosion rate data to constrain tectonic motion. The samples we measured were sieved to either a 0.125 mm - 0.710 mm fraction or a 0.125 mm - 4 mm fraction (depending on how much of the former was available). After measuring these 8 samples for 10Be and determining erosion rates, we used the approach of Granger et al. [1996] to calculate the subcatchment erosion rates between each sample point. In the subcatchments of the river where we used grain sizes up to 4 mm, we measured very low 10Be concentrations (corresponding to high erosion rates) and calculated nonsensical subcatchment erosion rates (i.e. negative rates). We therefore hypothesize that the coarser grain sizes we included preferentially sample a smaller upstream area, not the entire upstream catchment, which is assumed when measurements are based solely on the sand-sized fraction. To test this hypothesis, we used samples with a variety of grain sizes from the Shillong Plateau. We sieved 5 samples into three grain-size fractions (0.125 mm - 0.710 mm, 0.710 mm - 4 mm, and >4 mm) and measured 10Be concentrations in each fraction. Although there is some variation in which grain-size fraction yields the highest erosion rate, the coarser fractions generally have higher erosion rates. More significant are the results when calculating the subcatchment erosion rates, which suggest that even medium-sized grains (0.710 mm - 4 mm) are sampling an area smaller than the entire upstream area; this finding is consistent with the nonsensical results from the Neto River study. This result has numerous implications for the interpretation of 10Be erosion rates: most importantly, an alluvial sample may not be averaging the entire upstream area, even when using the sand-size fraction, so the resulting erosion rates are more pertinent to that sample point than to the entire catchment.
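
    The subcatchment calculation the abstract attributes to Granger et al. [1996] can be sketched as a simple mass balance: the downstream sample averages everything upstream, so the rate for the area between two nested sample points follows by subtraction. The function and numbers below are illustrative, not the Neto River data.

```python
def subcatchment_rate(E_down, A_down, E_up, A_up):
    """Erosion rate of the area between nested sample points (mass balance).

    E_*: catchment-averaged erosion rates (e.g., mm/kyr); A_*: areas (km^2).
    """
    return (E_down * A_down - E_up * A_up) / (A_down - A_up)

# An anomalously high apparent rate at the upstream point (low 10Be in a
# coarse fraction) can force the next subcatchment's rate negative -- the
# "nonsensical" rates described above.
print(subcatchment_rate(E_down=120.0, A_down=230.0, E_up=500.0, A_up=200.0))
```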

  12. The late Neandertal supraorbital fossils from Vindija Cave, Croatia: a biased sample?

    PubMed

    Ahern, James C M; Lee, Sang-Hee; Hawks, John D

    2002-09-01

    The late Neandertal sample from Vindija (Croatia) has been described as transitional between the earlier Central European Neandertals from Krapina (Croatia) and modern humans. However, the morphological differences indicating this transition may rather be the result of different sex and/or age compositions between the samples. This study tests the hypothesis that the metric differences between the Krapina and Vindija supraorbital samples are due to sampling bias. We focus upon the supraorbital region because past studies have posited this region as particularly indicative of the Vindija sample's transitional nature. Furthermore, the supraorbital region varies significantly with both age and sex. We analyzed four chords and two derived indices of supraorbital torus form as defined by Smith & Ranyard (1980, Am. J. Phys. Anthrop. 93, pp. 589-610). For each variable, we analyzed relative sample bias of the Krapina and Vindija samples using three sampling methods. In order to test the hypothesis that the Vindija sample contains an over-representation of females and/or young while the Krapina sample is normal or also female/young biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a Krapina-based population. In order to test the hypothesis that the Vindija sample is female/young biased while the Krapina sample is male/old biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a generated population whose mean is halfway between Krapina's and Vindija's. Finally, in order to test the hypothesis that the Vindija sample is normal while the Krapina sample contains an over-representation of males and/or old, we determined the probability of drawing a sample of the same size as and with a mean equal to or greater than Krapina's from a Vindija-based population. Unless we assume that the Vindija sample is female/young and the Krapina sample is male/old biased, our results falsify the hypothesis that the metric differences between the Krapina and Vindija samples are due to sample bias.

  13. How Sample Size Affects a Sampling Distribution

    ERIC Educational Resources Information Center

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
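
    The core point is easy to demonstrate numerically: the standard error of the sample mean shrinks as sigma/sqrt(n), and the sampling distribution normalizes even for a skewed population. A short simulation (population and values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
pop = rng.exponential(scale=2.0, size=100_000)   # a deliberately skewed population

for n in (5, 30, 100):
    # 10,000 sample means, each from a sample of size n
    means = pop[rng.integers(0, pop.size, size=(10_000, n))].mean(axis=1)
    print(f"n={n:4d}  empirical SE={means.std():.3f}  "
          f"sigma/sqrt(n)={pop.std() / np.sqrt(n):.3f}")
```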

  14. Macrozooplankton biomass in a warm-core Gulf Stream ring: Time series changes in size structure, taxonomic composition, and vertical distribution

    NASA Astrophysics Data System (ADS)

    Davis, Cabell S.; Wiebe, Peter H.

    1985-01-01

    Macrozooplankton size structure and taxonomic composition in warm-core ring 82B were examined from a time series (March, April, June) of ring-center MOCNESS (1 m) samples. Size distributions of 15 major taxonomic groups were determined from length measurements digitized from silhouette photographs of the samples. Silhouette digitization allows rapid quantification of zooplankton size structure and taxonomic composition. Length/weight regressions, determined for each taxon, were used to partition the biomass (displacement volumes) of each sample among the major taxonomic groups. Zooplankton taxonomic composition and size structure varied with depth and appeared to coincide with the hydrographic structure of the ring. In March and April, within the thermostad region of the ring, smaller herbivorous/omnivorous zooplankton, including copepods, crustacean larvae, and euphausiids, were dominant, whereas below this region, larger carnivores, such as medusae, ctenophores, fish, and decapods, dominated. Copepods were generally dominant in most samples above 500 m. Total macrozooplankton abundance and biomass increased between March and April, primarily because of increases in herbivorous taxa, including copepods, crustacean larvae, and larvaceans. A marked increase in total macrozooplankton abundance and biomass between April and June was characterized by an equally dramatic shift from smaller herbivores (1.0-3.0 mm) in April to large herbivores (5.0-6.0 mm) and carnivores (>15 mm) in June. Species identifications made directly from the samples suggest that changes in trophic structure resulted from seeding-type immigration and subsequent in situ population growth of Slope Water zooplankton species.

  15. Field sampling of loose erodible material: A new method to consider the full particle-size range

    NASA Astrophysics Data System (ADS)

    Klose, Martina; Gill, Thomas E.

    2017-04-01

    The aerodynamic entrainment of sand and dust is determined by the atmospheric forces exerted on the soil surface and by the soil-surface condition. If aerodynamic forces are strong enough to generate sand and dust lifting, the entrained sediment amount still critically depends on the supply of loose particles readily available for lifting. This loose erodible material (LEM) is sometimes defined as the thin layer of loose particles on top of a crusted surface. Here, we define LEM more generally as loose particles or particle aggregates available for entrainment, which may or may not overlay a soil crust. Field sampling of LEM is difficult, and only a few attempts have been made. Motivated by saltation as the most efficient process for generating dust emission, methods have focused on capturing LEM in the sand-size range or on determining the potential of a soil surface to be eroded by aerodynamic forces and particle impacts. Here, our focus is to capture the full particle-size distribution of LEM in situ, including the dust and sand-size ranges, to investigate the potential and likelihood of dust emission mechanisms (aerodynamic entrainment, saltation bombardment, aggregate disintegration) occurring. A new vacuum method is introduced and its capability to sample LEM without significant alteration of the LEM particle-size distribution is investigated.

  16. Determining the sample size for co-dominant molecular marker-assisted linkage detection for a monogenic qualitative trait by controlling the type-I and type-II errors in a segregating F2 population.

    PubMed

    Hühn, M; Piepho, H P

    2003-03-01

    Tests for linkage are usually performed using the lod score method. A critical question in linkage analyses is the choice of sample size. The appropriate sample size depends on the desired type-I error and power of the test. This paper investigates the exact type-I error and power of the lod score method in a segregating F(2) population with co-dominant markers and a qualitative monogenic dominant-recessive trait. For illustration, a disease-resistance trait is considered, where the susceptible allele is recessive. A procedure is suggested for finding the appropriate sample size. It is shown that recessive plants have about twice the information content of dominant plants, so the former should be preferred for linkage detection. In some cases the exact alpha-values for a given nominal alpha may be rather small due to the discrete nature of the sampling distribution in small samples. We show that a gain in power is possible by using exact methods.

  17. EPICS Controlled Collimator for Controlling Beam Sizes in HIPPO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napolitano, Arthur Soriano; Vogel, Sven C.

    2017-08-03

    Controlling the beam spot size and shape in a diffraction experiment determines the probed sample volume. The HIPPO (High-Pressure-Preferred Orientation) neutron time-of-flight diffractometer is located at the Lujan Neutron Scattering Center at Los Alamos National Laboratory. HIPPO characterizes microstructural parameters, such as phase composition, strains, grain size, or texture, of bulk (cm-sized) samples. In the current setup, the beam spot has a 10 mm diameter. Using a collimator consisting of two pairs of neutron-absorbing boron-nitride slabs, the horizontal and vertical dimensions of a rectangular beam spot can be defined. Using the HIPPO robotic sample changer for sample motion, the collimator would enable scanning of e.g. cylindrical samples along the cylinder axis by probing slices of such samples. The project presented here describes the implementation of such a collimator, in particular the motion control software. We utilized the EPICS (Experimental Physics and Industrial Control System) software interface to integrate the collimator control into the HIPPO instrument control system. Using EPICS, commands are sent to commercial stepper motors that move the beam windows.

  18. Technical note: Alternatives to reduce adipose tissue sampling bias.

    PubMed

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been made in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples (from 1 to 15) needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and size and number of adipocytes were determined by a Coulter Counter. These results were then fit with a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefit of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques as the number of samples increased from 1 to 15, with both techniques' acceptance ratios increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. These results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.

  19. Improving the accuracy of sediment-associated constituent concentrations in whole storm water samples by wet-sieving

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.; Bowman, G.

    2007-01-01

    Sand-sized particles (>63 µm) in whole storm water samples collected from urban runoff have the potential to produce data with substantial bias and/or poor precision both during sample splitting and laboratory analysis. New techniques were evaluated in an effort to overcome some of the limitations associated with sample splitting and analyzing whole storm water samples containing sand-sized particles. Wet-sieving separates sand-sized particles from a whole storm water sample. Once separated, both the sieved solids and the remaining aqueous (water suspension of particles less than 63 µm) samples were analyzed for total recoverable metals using a modification of USEPA Method 200.7. The modified version digests the entire sample, rather than an aliquot, of the sample. Using a total recoverable acid digestion on the entire contents of the sieved solid and aqueous samples improved the accuracy of the derived sediment-associated constituent concentrations. Concentration values of sieved solid and aqueous samples can later be summed to determine an event mean concentration. © ASA, CSSA, SSSA.

  20. Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling

    PubMed Central

    2006-01-01

    Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling. PMID:16937083
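
    The closing recommendation translates into a simple planning rule: compute the sample size required under simple random sampling for the desired margin of error, then double it to allow for the respondent-driven sampling design effect. A sketch of that rule (the prevalence and margin are illustrative):

```python
import math

def n_srs_proportion(p, moe, z=1.96):
    """Classic SRS sample size to estimate a proportion within +/- moe."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

n_srs = n_srs_proportion(p=0.20, moe=0.05)  # -> 246
n_rds = 2 * n_srs                           # paper's rule of thumb -> 492
print(n_srs, n_rds)
```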

  1. In Search of the Largest Possible Tsunami: An Example Following the 2011 Japan Tsunami

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2012-12-01

    Many tsunami hazard assessments focus on estimating the largest possible tsunami: i.e., the worst-case scenario. This is typically performed by examining historic and prehistoric tsunami data or by estimating the largest source that can produce a tsunami. We demonstrate that worst-case assessments derived from tsunami and tsunami-source catalogs are greatly affected by sampling bias. Both tsunami and tsunami sources are well represented by a Pareto distribution. It is intuitive to assume that there is some limiting size (i.e., runup or seismic moment) for which a Pareto distribution is truncated or tapered. Likelihood methods are used to determine whether a limiting size can be determined from existing catalogs. Results from synthetic catalogs indicate that several observations near the limiting size are needed for accurate parameter estimation. Accordingly, the catalog length needed to empirically determine the limiting size is dependent on the difference between the limiting size and the observation threshold, with larger catalog lengths needed for larger limiting-threshold size differences. Most, if not all, tsunami catalogs and regional tsunami source catalogs are of insufficient length to determine the upper bound on tsunami runup. As an example, estimates of the empirical tsunami runup distribution are obtained from the Miyako tide gauge station in Japan, which recorded the 2011 Tohoku-oki tsunami as the largest tsunami among 51 other events. Parameter estimation using a tapered Pareto distribution is made both with and without the Tohoku-oki event. The catalog without the 2011 event appears to have a low limiting tsunami runup. However, this is an artifact of undersampling. Including the 2011 event, the catalog conforms more to a pure Pareto distribution with no confidence in estimating a limiting runup. Estimating the size distribution of regional tsunami sources is subject to the same sampling bias. Physical attenuation mechanisms such as wave breaking likely limit the maximum tsunami runup at a particular site. However, historic and prehistoric data alone cannot determine the upper bound on tsunami runup. Because of problems endemic to sampling Pareto distributions of tsunamis and their sources, we recommend that tsunami hazard assessment be based on a specific design probability of exceedance following a pure Pareto distribution, rather than attempting to determine the worst-case scenario.

  2. Accounting for Incomplete Species Detection in Fish Community Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta

    2013-01-01

    Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in suite of delineated habitats stratified by reach types. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities whereas community occupancy models estimated speciesmore » richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining effort required (e.g. number of sites versus occasions).« less

  3. Random vs. systematic sampling from administrative databases involving human subjects.

    PubMed

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics summaries of the four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method χ² tests and unpaired t tests were performed to determine whether any of the differences (descriptively greater than 7% or 7 yr) were also statistically significant. The strength of the agreement between the provincial distributions was quantified by calculating the percent agreement for each (provincial pairwise-comparison methods). Any percent agreement less than 70% was judged unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yield acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order bias other than alphabetical listing by surname.
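
    The comparison itself is straightforward to reproduce on any membership-style list: draw one simple random sample and one every-k-th systematic sample of the same size, then compare the estimates against the full frame. A sketch with a synthetic frame (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
frame = np.sort(rng.normal(45, 10, size=6_000))   # e.g., ages in a sorted roster

n = 200
k = frame.size // n
srs = rng.choice(frame, size=n, replace=False)    # simple random sample
sys_ = frame[rng.integers(0, k)::k][:n]           # systematic: every k-th record

print(f"frame mean {frame.mean():.2f}  SRS {srs.mean():.2f}  SS {sys_.mean():.2f}")
# SS behaves like SRS here because the list order carries no periodic bias,
# matching the study's conclusion for alphabetically ordered databases.
```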

  4. The effect of salt crust on the thermal conductivity of one sample of fluvial particulate materials under Martian atmospheric pressures

    NASA Astrophysics Data System (ADS)

    Presley, Marsha A.; Craddock, Robert A.; Zolotova, Natalya

    2009-11-01

    A line-heat source apparatus was used to measure thermal conductivities of a lightly cemented fluvial sediment (salinity = 1.1 g · kg-1), and the same sample with the cement bonds almost completely disrupted, under low pressure, carbon dioxide atmospheres. The thermal conductivities of the cemented sample were approximately 3× higher, over the range of atmospheric pressures tested, than the thermal conductivities of the same sample after the cement bonds were broken. A thermal conductivity-derived particle size was determined for each sample by comparing these thermal conductivity measurements to previous data that demonstrated the dependence of thermal conductivity on particle size. Actual particle-size distributions were determined via physical separation through brass sieves. When uncemented, 87% of the particles were less than 125 μm in diameter, with 60% of the sample being less than 63 μm in diameter. As much as 35% of the cemented sample was composed of conglomerate particles with diameters greater than 500 μm. The thermal conductivities of the cemented sample were most similar to those of 500-μm glass beads, whereas the thermal conductivities of the uncemented sample were most similar to those of 75-μm glass beads. This study demonstrates that even a small amount of salt cement can significantly increase the thermal conductivity of particulate materials, as predicted by thermal modeling estimates by previous investigators.

  5. Distribution of the concentration of heavy metals associated with the sediment particles accumulated on road surfaces.

    PubMed

    Zafra, C A; Temprano, J; Tejero, I

    2011-07-01

    The heavy metal pollution caused by road run-off water constitutes a problem in urban areas. The metallic load associated with road sediment must be determined in order to study its impact on drainage systems and receiving waters, and to improve the design of prevention systems. This paper presents data regarding the sediment collected on road surfaces in the city of Torrelavega (northern Spain) during a period of 65 days (132 samples). Two sample types were collected: vacuum-dried samples and those swept up following vacuuming. The sediment loading (g m(-2)), particle size distribution (63-2800 µm) and heavy metal concentrations were determined. The data showed that the concentration of heavy metals tends to increase as particle diameter decreases (exponential tendency). The concentrations of Pb, Zn, Cu, Cr, Ni, Cd, Fe, Mn and Co in the size fraction <63 µm were 350, 630, 124, 57, 56, 38, 3231, 374 and 51 mg kg(-1), respectively (average traffic density: 3800 vehicles day(-1)). By increasing the residence time of the sediment, the concentration increases, whereas the ratio of the concentration between the different size fractions decreases. The concentration across the road diminishes as the distance between the roadway and the sampling site increases; when the distance increases, the ratio between size fractions for heavy metal concentrations increases. Finally, the main sources of heavy metals are the particles detached by braking (brake pads) and tyre wear (rubber), and are associated with particle sizes <125 µm.

  6. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared the accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fits for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
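
    A compact version of the estimator itself: fit a bivariate kernel density to location fixes and take the area enclosed by the density isopleth containing 95% of the mass. scipy's gaussian_kde with its default (Scott's rule) bandwidth stands in here for the LSCV smoothing the authors recommend; the fixes are simulated.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
fixes = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=50).T

kde = gaussian_kde(fixes)                     # default bandwidth, LSCV stand-in
xx, yy = np.mgrid[-4:4:200j, -4:4:200j]
dens = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

cell = (8 / 199) ** 2                         # grid cell area
order = np.sort(dens.ravel())[::-1]
level = order[np.searchsorted(np.cumsum(order) * cell, 0.95)]
area95 = (dens >= level).sum() * cell         # area inside the 95% isopleth
print(f"95% kernel home-range area: {area95:.1f}")
```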

  7. Annual design-based estimation for the annualized inventories of forest inventory and analysis: sample size determination

    Treesearch

    Hans T. Schreuder; Jin-Mann S. Lin; John Teply

    2000-01-01

    The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...

  8. A predictive approach to selecting the size of a clinical trial, based on subjective clinical opinion.

    PubMed

    Spiegelhalter, D J; Freedman, L S

    1986-01-01

    The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.

  9. Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care.

    PubMed

    Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo

    2015-02-01

    Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determining sample size are currently limited to balanced cluster sizes, both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for the analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario, involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, and coverage worsens as the outcome becomes less prevalent. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore the properties of binary random-effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster-size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low-prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.

  10. Blanks: a computer program for analyzing furniture rough-part needs in standard-size blanks

    Treesearch

    Philip A. Araman

    1983-01-01

    A computer program is described that allows a company to determine the number of edge-glued, standard-size blanks required to satisfy its rough-part needs for a given production period. Yield and cost information also is determined by the program. A list of the program inputs, outputs, and uses of outputs is described, and an example analysis with sample output is...

  11. Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans.

    PubMed

    Shah, R; Worner, S P; Chapman, R B

    2012-10-01

    Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection requires at least one (≥1) resistant individual to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, attempts to estimate the entire population (≥90%) of resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans, while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes of 3000 and 1500, respectively, were necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
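
    For the random-dispersion case, the detection sample size has a classic closed form: the smallest n with 1 - (1 - f)^n >= P, where f is the resistance frequency and P the desired detection probability. This reproduces the ~300 figure quoted above for f = 1%; the paper's somewhat larger values at 10-20% presumably reflect the discrete sample sizes its simulation evaluated. The sketch below is the generic formula, not the authors' simulation model.

```python
import math

def n_detect(freq, prob=0.95):
    """Smallest n giving probability `prob` of catching >= 1 resistant
    individual when resistance is randomly dispersed at frequency `freq`."""
    return math.ceil(math.log(1 - prob) / math.log(1 - freq))

for f in (0.01, 0.10, 0.20):
    print(f"{f:.0%}: n = {n_detect(f)}")   # 1%: 299, 10%: 29, 20%: 14
```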

  12. Big assumptions for small samples in crop insurance

    Treesearch

    Ashley Elaine Hungerford; Barry Goodwin

    2014-01-01

    The purpose of this paper is to investigate the effects of crop insurance premiums being determined by small samples of yields that are spatially correlated. If spatial autocorrelation and small sample size are not properly accounted for in premium ratings, the premium rates may inaccurately reflect the risk of a loss.

  13. Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias

    PubMed Central

    Chambers, David A.; Glasgow, Russell E.

    2014-01-01

    A number of commentaries have suggested that large studies are more reliable than smaller studies, and there is a growing interest in the analysis of “big data” that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. Clin Trans Sci 2014; Volume #: 1–5 PMID:25043853

  14. Numerical study of ultra-low field nuclear magnetic resonance relaxometry utilizing a single axis magnetometer for signal detection.

    PubMed

    Vogel, Michael W; Vegh, Viktor; Reutens, David C

    2013-05-01

    This paper investigates the optimal placement of a localized single-axis magnetometer for ultralow field (ULF) relaxometry in view of various sample shapes and sizes. The authors used the finite element method for the numerical analysis to determine the sample's magnetic field environment and to evaluate the optimal location of the single-axis magnetometer. Given the different samples, the authors analysed the magnetic field distribution around each sample and determined the optimal orientation and possible positions of the sensor to maximize signal strength, that is, the power of the free induction decay. The authors demonstrate that a glass vial with a flat bottom and a 10 ml volume is the best structure for achieving the highest signal among the samples studied. This paper demonstrates the importance of taking into account the combined effects of sensor configuration and sample parameters on signal generation prior to designing and constructing ULF systems with a single-axis magnetometer. Through numerical simulations the authors were able to optimize structural parameters, such as sample shape and size and sensor orientation and location, to maximize the measured signal in ultralow field relaxometry.

  15. Effect of kernel size and mill type on protein, milling yield, and baking quality of hard red spring wheat

    USDA-ARS?s Scientific Manuscript database

    Optimization of flour yield and quality is important in the milling industry. The objective of this study was to determine the effect of kernel size and mill type on flour yield and end-use quality. A hard red spring wheat composite sample was segregated, based on kernel size, into large, medium, ...

  16. Alpha spectrometric characterization of process-related particle size distributions from active particle sampling at the Los Alamos National Laboratory uranium foundry

    NASA Astrophysics Data System (ADS)

    Plionis, A. A.; Peterson, D. S.; Tandon, L.; LaMont, S. P.

    2010-03-01

    Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid, non-destructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.

  17. Practical limitations of single particle ICP-MS in the determination of nanoparticle size distributions and dissolution: case of rare earth oxides.

    PubMed

    Fréchette-Viens, Laurie; Hadioui, Madjid; Wilkinson, Kevin J

    2017-01-15

    The applicability of single particle ICP-MS (SP-ICP-MS) for the analysis of nanoparticle size distributions and the determination of particle numbers was evaluated using the rare earth oxide La2O3 as a model particle. The composition of the storage containers, as well as the ICP-MS sample introduction system, were found to significantly impact SP-ICP-MS analysis. While La2O3 nanoparticles (La2O3 NP) did not appear to interact strongly with sample containers, adsorptive losses of La3+ (over 24 h) were substantial (>72%) for fluorinated ethylene propylene bottles as opposed to polypropylene (<10%). Furthermore, each part of the sample introduction system (nebulizers made of perfluoroalkoxy alkane (PFA) or glass, PFA capillary tubing, and polyvinyl chloride (PVC) peristaltic pump tubing) contributed to La3+ adsorptive losses. On the other hand, the presence of natural organic matter in the nanoparticle suspensions led to decreased adsorptive losses in both the sample containers and the introduction system, suggesting that SP-ICP-MS may nonetheless be appropriate for NP analysis in environmental matrices. Coupling an ion-exchange resin to the SP-ICP-MS led to more accurate determinations of the La2O3 NP size distributions. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Biofouling on buoyant marine plastics: An experimental study into the effect of size on surface longevity.

    PubMed

    Fazey, Francesca M C; Ryan, Peter G

    2016-03-01

    Recent estimates suggest that roughly 100 times more plastic litter enters the sea than is found floating at the sea surface, despite the buoyancy and durability of many plastic polymers. Biofouling by marine biota is one possible mechanism responsible for this discrepancy. Microplastics (<5 mm in diameter) are more scarce than larger size classes, which makes sense because fouling is a function of surface area whereas buoyancy is a function of volume; the smaller an object, the greater its relative surface area. We tested whether plastic items with high surface area to volume ratios sank more rapidly by submerging 15 different sizes of polyethylene samples in False Bay, South Africa, for 12 weeks to determine the time required for samples to sink. All samples became sufficiently fouled to sink within the study period, but small samples lost buoyancy much faster than larger ones. There was a direct relationship between sample volume (buoyancy) and the time to attain a 50% probability of sinking, which ranged from 17 to 66 days of exposure. Our results provide the first estimates of the longevity of different sizes of plastic debris at the ocean surface. Further research is required to determine how fouling rates differ on free floating debris in different regions and in different types of marine environments. Such estimates could be used to improve model predictions of the distribution and abundance of floating plastic debris globally. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Dependence of flux-flow critical frequencies and generalized bundle sizes on distance of fluxoid traversal and fluxoid length in foil samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, J.D.; Joiner, W.C.H.

    1979-10-01

    Flux-flow noise power spectra taken on Pb₈₀In₂₀ foils as a function of the orientation of the magnetic field with respect to the sample surfaces are used to study changes in frequencies and bundle sizes as distances of fluxoid traversal and fluxoid lengths change. The results obtained for the frequency dependence of the noise spectra are entirely consistent with our model for flux motion interrupted by pinning centers, provided one makes the reasonable assumption that the distance between pinning centers which a fluxoid may encounter scales inversely with the fluxoid length. The importance of pinning centers in determining the noise characteristics is also demonstrated by the way in which subpulse distributions and generalized bundle sizes are altered by changes in the metallurgical structure of the sample. In unannealed samples the dependence of bundle size on magnetic field orientation is controlled by a structural anisotropy, and we find a correlation between large bundle size and the absence of short subpulse times. Annealing removes this anisotropy, and we find a stronger angular variation of bundle size than would be expected using present simplified models.

  20. Modeling ultrasound propagation through material of increasing geometrical complexity.

    PubMed

    Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen

    2018-06-01

    Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  1. Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.

    PubMed

    Gajewski, Byron J; Mayo, Matthew S

    2006-08-15

    A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians: the first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that a simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
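
    Because a mixture of Beta priors is conjugate to the binomial likelihood, the posterior is again a Beta mixture whose weights are updated by each component's Beta-Binomial marginal likelihood. A sketch of a design search in that spirit (the priors, target, and assumed response rate are illustrative, not the paper's):

```python
from scipy.stats import beta, betabinom

def post_prob_exceeds(x, n, target, priors, weights):
    """P(true response > target | x successes in n), mixture-of-Betas prior."""
    marg = [w * betabinom.pmf(x, n, a, b) for w, (a, b) in zip(weights, priors)]
    post_w = [m / sum(marg) for m in marg]
    return sum(w * beta.sf(target, a + x, b + n - x)   # conjugate update
               for w, (a, b) in zip(post_w, priors))

priors, weights = [(2, 8), (8, 2)], [0.5, 0.5]   # pessimist vs optimist
for n in (20, 40, 60, 80):
    x = round(0.5 * n)                           # suppose a 50% response rate
    print(n, round(post_prob_exceeds(x, n, 0.4, priors, weights), 3))
```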

  2. Measurements of Regolith Simulant Thermal Conductivity Under Asteroid and Mars Surface Conditions

    NASA Astrophysics Data System (ADS)

    Ryan, A. J.; Christensen, P. R.

    2017-12-01

    Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine temperature-dependent thermal conductivity of complex regolith simulants under rough to high vacuum and across a wide range of temperatures. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains that are up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and eventually cementation. We present the experimental data and model results for a suite of samples that were selected to isolate and address regolith physical parameters that affect bulk conductivity. Spherical glass beads of various sizes were used to measure the effect of size frequency distribution. Spherical beads of polypropylene and well-rounded quartz sand have respectively lower and higher solid phase thermal conductivities than the glass beads and thus provide the opportunity to test the sensitivity of bulk conductivity to differences in solid phase conductivity. Gas pressure in our asteroid experimental chambers is held at 10^-6 torr, which is sufficient to negate gas thermal conduction in even our coarsest of samples. On Mars, the atmospheric pressure is such that the mean free path of the gas molecules is comparable to the pore size for many regolith particulates. Thus, subtle variations in pore size and/or atmospheric pressure can produce large changes in bulk regolith conductivity. For each sample measured in our martian environmental chamber, we repeat thermal measurement runs at multiple pressures to observe this behavior. Finally, we present conductivity measurements of angular basaltic simulant that is physically analogous to sand and gravel that may be present on Bennu. This simulant was used for OSIRIS-REx TAGSAM Sample Return Arm engineering tests. We measure the original size frequency distribution as well as several sorted size fractions. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe for the spacecraft and meets grain size requirements for sampling.

  3. Determination of grain-size characteristics from electromagnetic seabed mapping data: A NW Iberian shelf study

    NASA Astrophysics Data System (ADS)

    Baasch, Benjamin; Müller, Hendrik; von Dobeneck, Tilo; Oberle, Ferdinand K. J.

    2017-05-01

    The electric conductivity and magnetic susceptibility of sediments are fundamental parameters in environmental geophysics. Both can be derived from marine electromagnetic profiling, a novel, fast and non-invasive seafloor mapping technique. Here we present statistical evidence that electric conductivity and magnetic susceptibility can help to determine physical grain-size characteristics (size, sorting and mud content) of marine surficial sediments. Electromagnetic data acquired with the bottom-towed electromagnetic profiler MARUM NERIDIS III were analysed and compared with grain-size data from 33 samples across the NW Iberian continental shelf. A negative correlation between mean grain size and conductivity (R=-0.79), as well as between mean grain size and susceptibility (R=-0.78), was found. Simple and multiple linear regression analyses were carried out to predict mean grain size, mud content and the standard deviation of the grain-size distribution from conductivity and susceptibility. The comparison of both methods showed that multiple linear regression models predict the grain-size distribution characteristics better than the simple models. This exemplary study demonstrates that electromagnetic benthic profiling is capable of estimating the mean grain size, sorting and mud content of marine surficial sediments at a very high significance level. Transfer functions can be calibrated using grain-size data from a few reference samples and extrapolated along shelf-wide survey lines. This study suggests that electromagnetic benthic profiling should play a larger role in coastal zone management, seafloor contamination and sediment provenance studies in continental shelf systems worldwide.
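
    The prediction step is ordinary multiple linear regression: a grain-size characteristic regressed on conductivity and susceptibility from co-located samples, with the fitted transfer function then applied along the survey lines. A sketch with synthetic stand-ins for the 33 calibration samples (all coefficients and units are illustrative, not the NW Iberian calibration):

```python
import numpy as np

rng = np.random.default_rng(3)
cond = rng.uniform(0.5, 2.0, size=33)        # electric conductivity (S/m)
susc = rng.uniform(1e-5, 5e-4, size=33)      # magnetic susceptibility (SI)
grain = 5.0 - 1.2 * cond - 3e3 * susc + rng.normal(0, 0.2, 33)  # phi units

X = np.column_stack([np.ones_like(cond), cond, susc])
coef, *_ = np.linalg.lstsq(X, grain, rcond=None)   # multiple linear regression
pred = X @ coef                                    # the "transfer function"
print("coefficients:", coef.round(3))
print("R:", np.corrcoef(pred, grain)[0, 1].round(3))
```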

  4. COMPARATIVE TOXICITY OF SIZE FRACTIONATED AIRBORNE PARTICULATE MATTER OBTAINED FROM DIFFERENT CITIES IN THE USA

    EPA Science Inventory

    This paper is the result of a collaboration to assess effects of size fractionated PM from different locations on murine pulmonary inflammatory responses. In the course of this work, the investigators also determined the chemical makeup of each of the samples.

  5. Organizational Correlates of Management Training Interests.

    ERIC Educational Resources Information Center

    Tills, Marvin

    A study was made of a sample of Wisconsin manufacturing firms and a subsample of firms in different size categories to determine organizational correlates of management training interests. Correlations were sought between characteristics of firms (ownership, relationship to parent company, size of employment, market orientation, growth trends,…

  6. Predicting fractional bed load transport rates: Application of the Wilcock‐Crowe equations to a regulated gravel bed river

    USGS Publications Warehouse

    Gaeuman, David; Andrews, E.D.; Krause, Andreas; Smith, Wes

    2009-01-01

    Bed load samples from four locations in the Trinity River of northern California are analyzed to evaluate the performance of the Wilcock‐Crowe bed load transport equations for predicting fractional bed load transport rates. Bed surface particles become smaller and the fraction of sand on the bed increases with distance downstream from Lewiston Dam. The dimensionless reference shear stress for the mean bed particle size (τ*rm) is largest near the dam, but varies relatively little between the more downstream locations. The relation between τ*rm and the reference shear stresses for other size fractions is constant across all locations. Total bed load transport rates predicted with the Wilcock‐Crowe equations are within a factor of 2 of sampled transport rates for 68% of all samples. The Wilcock‐Crowe equations nonetheless consistently under‐predict the transport of particles larger than 128 mm, frequently by more than an order of magnitude. Accurate prediction of the transport rates of the largest particles is important for models in which the evolution of the surface grain size distribution determines subsequent bed load transport rates. Values of τ*rm estimated from bed load samples are up to 50% larger than those predicted with the Wilcock‐Crowe equations, and sampled bed load transport approximates equal mobility across a wider range of grain sizes than is implied by the equations. Modifications to the Wilcock‐Crowe equation for determining τ*rm and the hiding function used to scale τ*rm to other grain size fractions are proposed to achieve the best fit to observed bed load transport in the Trinity River.
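    For reference, the quantities being tested here have simple closed forms. A sketch of the reference Shields stress and hiding function as published by Wilcock and Crowe (2003); note that this paper proposes modifications to both, which are not reproduced here:

    ```python
    import math

    def tau_star_rm(sand_fraction: float) -> float:
        """Dimensionless reference Shields stress for the mean surface size
        (Wilcock & Crowe 2003): 0.021 + 0.015 * exp(-20 * Fs)."""
        return 0.021 + 0.015 * math.exp(-20.0 * sand_fraction)

    def hiding_exponent(d_i: float, d_sm: float) -> float:
        """Exponent b in the hiding function tau_ri / tau_rm = (D_i / D_sm)**b."""
        return 0.67 / (1.0 + math.exp(1.5 - d_i / d_sm))

    d_sm = 0.045  # m, assumed mean surface grain size for illustration
    print(f"tau*_rm at 15% sand: {tau_star_rm(0.15):.4f}")
    for d_i in (0.008, 0.045, 0.128, 0.256):
        ratio = (d_i / d_sm) ** hiding_exponent(d_i, d_sm)
        # Ratios compressed toward 1 reflect near-equal mobility: small
        # grains are "hidden" and large grains are over-exposed.
        print(f"D_i = {1000 * d_i:3.0f} mm: tau_ri / tau_rm = {ratio:.2f}")
    ```

    The paper's finding that sampled transport approximates equal mobility over a wider size range than the equations imply corresponds to ratios even closer to 1 than this hiding function produces.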

  7. A Monte-Carlo simulation analysis for evaluating the severity distribution functions (SDFs) calibration methodology and determining the minimum sample-size requirements.

    PubMed

    Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique

    2017-01-01

    Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task that demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities; SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these models are fitted and validated using data from a small number of selected states, they must be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor, but this methodology was never validated through research, and there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis of the bias between the 'true' and 'estimated' calibration factors. The analysis showed that the further the true calibration factor deviates from 1, the more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities used in the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.
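    The simulate-calibrate-compare loop underlying such a study can be scaffolded in a few lines. A minimal sketch with a Poisson crash model and gamma-distributed site predictions standing in for the paper's SDF-specific design (all distributions and values below are assumptions, not the study's):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def mean_rel_bias(true_c: float, n_sites: int, mean_pred: float = 2.0,
                      reps: int = 5000) -> float:
        """Mean relative bias of the estimated calibration factor when
        observed counts are Poisson around true_c times the predicted value."""
        biases = np.empty(reps)
        for i in range(reps):
            predicted = rng.gamma(2.0, mean_pred / 2.0, n_sites)  # site spread
            observed = rng.poisson(true_c * predicted)
            c_hat = observed.sum() / predicted.sum()   # HSM-style scalar factor
            biases[i] = (c_hat - true_c) / true_c
        return biases.mean()

    for c in (0.5, 1.0, 1.5):
        for n in (50, 200, 800):
            print(f"true C = {c}, sites = {n}: "
                  f"mean rel. bias = {mean_rel_bias(c, n):+.4f}")
    ```

    The paper's actual design additionally models the multinomial split of crashes across severity levels, which is where the reported dependence of bias on the true calibration factor arises.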

  8. Biomass Compositional Analysis Laboratory Procedures | Bioenergy | NREL

    Science.gov Websites

    These procedures describe methods for sample drying and size reduction, for obtaining representative samples, and for determining the amount of solids or moisture present in a solid or slurry biomass sample. Notes on what the neutral detergent fiber (NDF) and acid detergent fiber (ADF) methods report are also included.

  9. Predicting stellar angular diameters from V, IC, H and K photometry

    NASA Astrophysics Data System (ADS)

    Adams, Arthur D.; Boyajian, Tabetha S.; von Braun, Kaspar

    2018-01-01

    Determining the physical properties of microlensing events depends on having accurate angular sizes of the source star. Using long baseline optical interferometry, we are able to measure the angular sizes of nearby stars with uncertainties ≤2 per cent. We present empirically derived relations of angular diameters which are calibrated using both a sample of dwarfs/subgiants and a sample of giant stars. These relations are functions of five colour indices in the visible and near-infrared, and have uncertainties of 1.8-6.5 per cent depending on the colour used. We find that a combined sample of both main-sequence and evolved stars of A-K spectral types is well fitted by a single relation for each colour considered. We find that in the colours considered, metallicity does not play a statistically significant role in predicting stellar size, leading to a means of predicting observed sizes of stars from colour alone.
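    Relations of this family typically take a surface-brightness form in which the zero-magnitude angular diameter is a polynomial in colour. A sketch of the (V-K) case; the coefficients c0 and c1 below are illustrative placeholders, not the fitted values from this work:

    ```python
    def log_theta_ld(v_mag: float, k_mag: float,
                     c0: float = 0.54, c1: float = 0.26) -> float:
        """Angular diameter (mas) from a (V-K) surface-brightness-style
        relation: log10(theta) = c0 + c1*(V - K) - 0.2*V.
        c0, c1 are placeholder coefficients for illustration only."""
        return c0 + c1 * (v_mag - k_mag) - 0.2 * v_mag

    # Example: a star with V = 7.0 and K = 5.0
    theta = 10 ** log_theta_ld(v_mag=7.0, k_mag=5.0)
    print(f"predicted theta_LD ~ {theta:.3f} mas")
    ```

    The -0.2*V term converts the colour-indexed surface brightness into an apparent angular size, which is why a single relation can serve both dwarfs and giants once calibrated.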

  10. Noninvasive genetics provides insights into the population size and genetic diversity of an Amur tiger population in China.

    PubMed

    Wang, Dan; Hu, Yibo; Ma, Tianxiao; Nie, Yonggang; Xie, Yan; Wei, Fuwen

    2016-01-01

    Understanding population size and genetic diversity is critical for effective conservation of endangered species. The Amur tiger (Panthera tigris altaica) is the largest felid and a flagship species for wildlife conservation. Due to habitat loss and human activities, available habitat and population size are continuously shrinking. However, little is known about the true population size and genetic diversity of wild tiger populations in China. In this study, we collected 55 fecal samples and 1 hair sample to investigate the population size and genetic diversity of wild Amur tigers in Hunchun National Nature Reserve, Jilin Province, China. From the samples, we determined that 23 fecal samples and 1 hair sample were from 7 Amur tigers: 2 males, 4 females and 1 individual of unknown sex. Interestingly, 2 fecal samples that were presumed to be from tigers were from Amur leopards, highlighting the significant advantages of noninvasive genetics over traditional methods in studying rare and elusive animals. Analyses from this sample suggested that the genetic diversity of wild Amur tigers is much lower than that of Bengal tigers, consistent with previous findings. Furthermore, the genetic diversity of this Hunchun population in China was lower than that of the adjoining subpopulation in southwest Primorye Russia, likely due to sampling bias. Considering the small population size and relatively low genetic diversity, it is urgent to protect this endangered local subpopulation in China. © 2015 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  11. Anomalous small-angle scattering as a way to solve the Babinet principle problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boiko, M. E., E-mail: m.e.boiko@mail.ioffe.ru; Sharkov, M. D.; Boiko, A. M.

    2013-12-15

    X-ray absorption spectra (XAS) were used to determine the absorption edges of the atoms present in a sample under study. A series of small-angle X-ray scattering (SAXS) measurements using monochromatic X-ray beams at different wavelengths near the absorption edges was performed to solve the Babinet principle problem. The sizes of clusters containing the atoms identified by XAS were then determined in the SAXS experiments. In contrast to differential X-ray porosimetry, anomalous SAXS makes it possible to determine the sizes of clusters of different atomic compositions.

  12. Anomalous small-angle scattering as a way to solve the Babinet principle problem

    NASA Astrophysics Data System (ADS)

    Boiko, M. E.; Sharkov, M. D.; Boiko, A. M.; Bobyl, A. V.

    2013-12-01

    X-ray absorption spectra (XAS) were used to determine the absorption edges of the atoms present in a sample under study. A series of small-angle X-ray scattering (SAXS) measurements using monochromatic X-ray beams at different wavelengths near the absorption edges was performed to solve the Babinet principle problem. The sizes of clusters containing the atoms identified by XAS were then determined in the SAXS experiments. In contrast to differential X-ray porosimetry, anomalous SAXS makes it possible to determine the sizes of clusters of different atomic compositions.

  13. Sedimentology and geochemistry of mud volcanoes in the Anaximander Mountain Region from the Eastern Mediterranean Sea.

    PubMed

    Talas, Ezgi; Duman, Muhammet; Küçüksezgin, Filiz; Brennan, Michael L; Raineault, Nicole A

    2015-06-15

    Investigations were carried out on surface sediments collected from the Anaximander mud volcanoes in the Eastern Mediterranean Sea to determine their sedimentary and geochemical properties. The sediment grain-size distribution and geochemical contents were determined by grain-size, organic carbon, carbonate, and element analyses. Element contents were compared with average crustal background levels. The factors that affect element distribution in the sediments were calculated for the nine push-core samples taken from the surfaces of the mud volcanoes by the E/V Nautilus. The grain size of the samples varies from sand to sandy silt. Enrichment and contamination factor analyses showed that these indices can also be used to evaluate deep-sea environmental and source parameters. It is concluded that biological and cold-seep effects are the main drivers of surface sediment characteristics at the Anaximander mud volcanoes. Copyright © 2015 Elsevier Ltd. All rights reserved.
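    The enrichment factor referred to is conventionally the Al-normalised ratio of a sample concentration to its crustal background. A minimal sketch with invented concentrations (the values below are not the study's data):

    ```python
    # Hypothetical concentrations (ppm); crustal values stand in for the
    # "background levels of Earth's crust" used for comparison.
    CRUST = {"Al": 80000.0, "Pb": 17.0, "Zn": 67.0}

    def enrichment_factor(sample: dict, element: str) -> float:
        """EF = (Me/Al)_sample / (Me/Al)_crust; EF >> 1 suggests a
        non-crustal (e.g. anthropogenic or cold-seep) source."""
        return (sample[element] / sample["Al"]) / (CRUST[element] / CRUST["Al"])

    core = {"Al": 65000.0, "Pb": 30.0, "Zn": 90.0}  # invented push-core values
    for el in ("Pb", "Zn"):
        print(f"EF({el}) = {enrichment_factor(core, el):.2f}")
    ```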

  14. A Scanning Transmission Electron Microscopy Method for Determining Manganese Composition in Welding Fume as a Function of Primary Particle Size

    PubMed Central

    Richman, Julie D.; Livi, Kenneth J.T.; Geyh, Alison S.

    2011-01-01

    Increasing evidence suggests that the physicochemical properties of inhaled nanoparticles influence the resulting toxicokinetics and toxicodynamics. This report presents a method using scanning transmission electron microscopy (STEM) to measure the Mn content throughout the primary particle size distribution of welding fume particle samples collected on filters for application in exposure and health research. Dark field images were collected to assess the primary particle size distribution and energy-dispersive X-ray and electron energy loss spectroscopy were performed for measurement of Mn composition as a function of primary particle size. A manual method incorporating imaging software was used to measure the primary particle diameter and to select an integration region for compositional analysis within primary particles throughout the size range. To explore the variation in the developed metric, the method was applied to 10 gas metal arc welding (GMAW) fume particle samples of mild steel that were collected under a variety of conditions. The range of Mn composition by particle size was −0.10 to 0.19 %/nm, where a positive estimate indicates greater relative abundance of Mn increasing with primary particle size and a negative estimate conversely indicates decreasing Mn content with size. However, the estimate was only statistically significant (p<0.05) in half of the samples (n=5), which all had a positive estimate. In the remaining samples, no significant trend was measured. Our findings indicate that the method is reproducible and that differences in the abundance of Mn by primary particle size among welding fume samples can be detected. PMID:21625364

  15. A Scanning Transmission Electron Microscopy Method for Determining Manganese Composition in Welding Fume as a Function of Primary Particle Size.

    PubMed

    Richman, Julie D; Livi, Kenneth J T; Geyh, Alison S

    2011-06-01

    Increasing evidence suggests that the physicochemical properties of inhaled nanoparticles influence the resulting toxicokinetics and toxicodynamics. This report presents a method using scanning transmission electron microscopy (STEM) to measure the Mn content throughout the primary particle size distribution of welding fume particle samples collected on filters for application in exposure and health research. Dark field images were collected to assess the primary particle size distribution and energy-dispersive X-ray and electron energy loss spectroscopy were performed for measurement of Mn composition as a function of primary particle size. A manual method incorporating imaging software was used to measure the primary particle diameter and to select an integration region for compositional analysis within primary particles throughout the size range. To explore the variation in the developed metric, the method was applied to 10 gas metal arc welding (GMAW) fume particle samples of mild steel that were collected under a variety of conditions. The range of Mn composition by particle size was -0.10 to 0.19 %/nm, where a positive estimate indicates greater relative abundance of Mn increasing with primary particle size and a negative estimate conversely indicates decreasing Mn content with size. However, the estimate was only statistically significant (p<0.05) in half of the samples (n=5), which all had a positive estimate. In the remaining samples, no significant trend was measured. Our findings indicate that the method is reproducible and that differences in the abundance of Mn by primary particle size among welding fume samples can be detected.
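    The %/nm metric in both versions of this report is the slope of a per-sample linear regression of Mn content on primary particle diameter. A minimal sketch with invented measurements (not the GMAW data):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical per-particle measurements from one fume sample:
    # primary particle diameter (nm) and Mn mass fraction (%).
    diam = np.array([12, 18, 25, 33, 41, 55, 70, 90, 120, 160], dtype=float)
    mn_pct = np.array([3.1, 3.4, 3.2, 3.9, 4.1, 4.6, 4.4, 5.2, 5.9, 6.3])

    fit = stats.linregress(diam, mn_pct)
    # Slope in %/nm, matching the paper's metric; p < 0.05 indicates a
    # statistically significant size trend in Mn content for this sample.
    print(f"slope = {fit.slope:+.4f} %/nm, p = {fit.pvalue:.4f}")
    ```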

  16. Size exclusion HPLC of proteins for evaluation of durum wheat quality

    USDA-ARS?s Scientific Manuscript database

    The present research aimed to assess size exclusion HPLC (SE-HPLC) for determining protein molecular weight distribution in the quality evaluation of durum semolina. Semolina samples were milled from 13 durum genotypes grown at 7 locations in 2009 and 2010 in ND. Sodium dodecyl sulfate (SDS) buffer ...

  17. Determination of phytate in high molecular weight, charged organic matrices by two-dimensional size exclusion-ion chromatography

    USDA-ARS?s Scientific Manuscript database

    A two-dimensional chromatography method for analyzing anionic targets (specifically phytate) in complex matrices is described. Prior to quantification by anion exchange chromatography, the sample matrix was prepared by size exclusion chromatography, which removed the majority of matrix complexities....

  18. Radiographic analysis of vocal tract length and its relation to overall body size in two canid species.

    PubMed

    Plotsky, K; Rendall, D; Riede, T; Chase, K

    2013-09-01

    Body size is an important determinant of resource and mate competition in many species. Competition is often mediated by conspicuous vocal displays, which may help to intimidate rivals and attract mates by providing honest cues to signaler size. Fitch proposed that vocal tract resonances (or formants) should provide particularly good, or honest, acoustic cues to signaler size because they are determined by the length of the vocal tract, which in turn, is hypothesized to scale reliably with overall body size. There is some empirical support for this hypothesis, but to date, many of the effects have been either mixed for males compared with females, weaker than expected in one or the other sex, or complicated by sampling issues. In this paper, we undertake a direct test of Fitch's hypothesis in two canid species using large samples that control for age- and sex-related variation. The samples involved radiographic images of 120 Portuguese water dogs Canis lupus familiaris and 121 Russian silver foxes Vulpes vulpes. Direct measurements were made of vocal tract length from X-ray images and compared against independent measures of body size. In adults of both species, and within both sexes, overall vocal tract length was strongly and significantly correlated with body size. Effects were strongest for the oral component of the vocal tract. By contrast, the length of the pharyngeal component was not as consistently related to body size. These outcomes are some of the clearest evidence to date in support of Fitch's hypothesis. At the same time, they highlight the potential for elements of both honest and deceptive body signaling to occur simultaneously via differential acoustic cues provided by the oral versus pharyngeal components of the vocal tract.

  19. Radiographic analysis of vocal tract length and its relation to overall body size in two canid species

    PubMed Central

    Plotsky, K.; Rendall, D.; Riede, T.; Chase, K.

    2013-01-01

    Body size is an important determinant of resource and mate competition in many species. Competition is often mediated by conspicuous vocal displays, which may help to intimidate rivals and attract mates by providing honest cues to signaler size. Fitch proposed that vocal tract resonances (or formants) should provide particularly good, or honest, acoustic cues to signaler size because they are determined by the length of the vocal tract, which in turn, is hypothesized to scale reliably with overall body size. There is some empirical support for this hypothesis, but to date, many of the effects have been either mixed for males compared with females, weaker than expected in one or the other sex, or complicated by sampling issues. In this paper, we undertake a direct test of Fitch’s hypothesis in two canid species using large samples that control for age- and sex-related variation. The samples involved radiographic images of 120 Portuguese water dogs Canis lupus familiaris and 121 Russian silver foxes Vulpes vulpes. Direct measurements were made of vocal tract length from X-ray images and compared against independent measures of body size. In adults of both species, and within both sexes, overall vocal tract length was strongly and significantly correlated with body size. Effects were strongest for the oral component of the vocal tract. By contrast, the length of the pharyngeal component was not as consistently related to body size. These outcomes are some of the clearest evidence to date in support of Fitch’s hypothesis. At the same time, they highlight the potential for elements of both honest and deceptive body signaling to occur simultaneously via differential acoustic cues provided by the oral versus pharyngeal components of the vocal tract. PMID:24363497

  20. Design considerations for case series models with exposure onset measurement error.

    PubMed

    Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V

    2013-02-28

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.

  1. FIELD SAMPLING OF RESIDUAL AVIATION GASOLINE IN SANDY SOIL

    EPA Science Inventory

    Two complementary field sampling methods for the determination of residual aviation gasoline content in the contaminated capillary fringe of a fine, uniform, sandy soil were investigated. The first method featured field extrusion of core barrels into pint-size Mason jars, while ...

  2. Modeling grain size variations of aeolian gypsum deposits at White Sands, New Mexico, using AVIRIS imagery

    USGS Publications Warehouse

    Ghrefat, H.A.; Goodell, P.C.; Hubbard, B.E.; Langford, R.P.; Aldouri, R.E.

    2007-01-01

    Visible and Near-Infrared (VNIR) through Short Wavelength Infrared (SWIR) (0.4-2.5 µm) AVIRIS data, along with laboratory spectral measurements and analyses of field samples, were used to characterize grain size variations in aeolian gypsum deposits across barchan-transverse, parabolic, and barchan dunes at White Sands, New Mexico, USA. All field samples contained a mineralogy of ~100% gypsum. In order to document grain size variations at White Sands, surficial gypsum samples were collected along three transects parallel to the prevailing downwind direction. Grain size analyses were carried out on the samples by sieving them into seven size fractions ranging from 45 to 621 µm, which were subjected to spectral measurements. Absorption band depths of the size fractions were determined after applying an automated continuum-removal procedure to each spectrum. Then, the relationship between absorption band depth and gypsum size fraction was established using a linear regression. Three software processing steps were carried out to measure the grain size variations of gypsum in the Dune Area using AVIRIS data. AVIRIS mapping results, field work and laboratory analysis all show that the interdune areas have lower absorption band depth values and consist of finer grained gypsum deposits. In contrast, the dune crest areas have higher absorption band depth values and consist of coarser grained gypsum deposits. Based on laboratory estimates, a representative barchan-transverse dune (Transect 1) has a mean grain size of 1.16 φ (449 µm). The error bar results show that the error ranges from -50 to +50 µm. Mean grain size for a representative parabolic dune (Transect 2) is 1.51 φ (352 µm), and 1.52 φ (347 µm) for a representative barchan dune (Transect 3). T-test results confirm that there are differences in the grain size distributions between barchan and parabolic dunes and between interdune and dune crest areas. The t-test results also show that there are no significant differences between modeled and laboratory-measured grain size values. Hyperspectral grain size modeling can help to determine dynamic processes shaping the formation of the dunes such as wind directions, and the relative strengths of winds through time. This has implications for studying such processes on other planetary landforms that have mineralogy with unique absorption bands in VNIR-SWIR hyperspectral data. © 2006 Elsevier B.V. All rights reserved.
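    The band-depth step is a straight-line continuum removal followed by a depth measurement, after which band depth is regressed against grain size. A minimal sketch; the shoulder wavelengths and the synthetic absorption feature below are assumptions, not the AVIRIS processing chain:

    ```python
    import numpy as np

    def band_depth(wavelengths: np.ndarray, reflectance: np.ndarray,
                   left: float, right: float) -> float:
        """Depth of an absorption feature after dividing out a straight-line
        continuum anchored at the two shoulder wavelengths."""
        r_left = np.interp(left, wavelengths, reflectance)
        r_right = np.interp(right, wavelengths, reflectance)
        mask = (wavelengths >= left) & (wavelengths <= right)
        continuum = np.interp(wavelengths[mask], [left, right], [r_left, r_right])
        removed = reflectance[mask] / continuum
        return 1.0 - removed.min()

    # Synthetic Gaussian absorption standing in for a gypsum feature near
    # 1.75 um; deeper bands correspond to coarser grains in this study.
    wl = np.linspace(1.6, 1.9, 61)
    refl = 0.8 - 0.25 * np.exp(-0.5 * ((wl - 1.75) / 0.03) ** 2)
    print(f"band depth = {band_depth(wl, refl, 1.65, 1.85):.3f}")
    ```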

  3. Droplet-size distribution and stability of commercial injectable lipid emulsions containing fish oil.

    PubMed

    Gallegos, Críspulo; Valencia, Concepción; Partal, Pedro; Franco, José M; Maglio, Omay; Abrahamsson, Malin; Brito-de la Fuente, Edmundo

    2012-08-01

    The droplet size of commercial fish oil-containing injectable lipid emulsions, including conformance to United States Pharmacopeia (USP) standards on fat-globule size, was investigated. A total of 18 batches of three multichamber parenteral products containing the emulsion SMOFlipid as a component were analyzed. Samples from multiple lots of the products were evaluated to determine compliance with standards on the volume-weighted percentage of fat exceeding 0.05% (PFAT(5)) specified in USP chapter 729 to ensure the physical stability of i.v. lipid emulsions. The products were also analyzed to determine the effects of various storage times (3, 6, 9, and 12 months) and storage temperatures (25, 30, and 40 °C) on product stability. Larger-size lipid particles were quantified via single-particle optical sensing (SPOS). The emulsion's droplet-size distribution was determined via laser light scattering. SPOS and light-scattering analysis demonstrated mean PFAT(5) values well below USP-specified globule-size limits for all the tested products under all study conditions. In addition, emulsion aging at any storage temperature in the range studied did not result in a significant increase of PFAT(5) values, and mean droplet-size values did not change significantly during storage of up to 12 months at temperatures of 25-40 °C. PFAT(5) values were below the USP upper limits in SMOFlipid samples from multiple lots of three multichamber products after up to 12 months of storage at 25 or 30 °C or 6 months of storage at 40 °C.

  4. A critical evaluation of an asymmetrical flow field-flow fractionation system for colloidal size characterization of natural organic matter.

    PubMed

    Zhou, Zhengzhen; Guo, Laodong

    2015-06-19

    Colloidal retention characteristics, recovery and size distribution of model macromolecules and natural dissolved organic matter (DOM) were systematically examined using an asymmetrical flow field-flow fractionation (AFlFFF) system under various membrane size cutoffs and carrier solutions. Polystyrene sulfonate (PSS) standards with known molecular weights (MW) were used to determine their permeation and recovery rates by membranes with different nominal MW cutoffs (NMWCO) within the AFlFFF system. Based on a ≥90% recovery rate for PSS standards by the AFlFFF system, the actual NMWCOs were determined to be 1.9 kDa for the 0.3 kDa membrane, 2.7 kDa for the 1 kDa membrane, and 33 kDa for the 10 kDa membrane, respectively. After membrane calibration, natural DOM samples were analyzed with the AFlFFF system to determine their colloidal size distribution and the influence from membrane NMWCOs and carrier solutions. Size partitioning of DOM samples showed a predominant colloidal size fraction in the <5 nm or <10 kDa size range, consistent with the size characteristics of humic substances as the main terrestrial DOM component. Recovery of DOM by the AFlFFF system, as determined by UV-absorbance at 254 nm, decreased significantly with increasing membrane NMWCO, from 45% by the 0.3 kDa membrane to 2-3% by the 10 kDa membrane. Since natural DOM is mostly composed of lower MW substances (<10 kDa) and the actual membrane cutoffs are normally larger than their manufacturer ratings, a 0.3 kDa membrane (with an actual NMWCO of 1.9 kDa) is highly recommended for colloidal size characterization of natural DOM. Among the three carrier solutions, borate buffer seemed to provide the highest recovery and optimal separation of DOM. Rigorous calibration with macromolecular standards and optimization of system conditions are a prerequisite for quantifying colloidal size distribution using the flow field-flow fractionation technique. In addition, the coupling of AFlFFF with fluorescence EEMs could provide new insights into DOM heterogeneity in different colloidal size fractions. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    NASA Astrophysics Data System (ADS)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The sampling design included four 300 m transects, with clip-harvest plots spaced every 50 m and LAI sub-transects every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip-harvest plots were co-located 4 m from corresponding LAI transects and had dimensions of 0.1 m by 2 m. We conducted regression analyses with LAI and clip harvest data to determine whether LAI can be used as a suitable proxy for aboveground standing biomass. We also compared optimal sample sizes derived from LAI data and from clip-harvest data collected with two different clip-harvest areas (0.1 m by 1 m vs. 0.1 m by 2 m). Sample sizes were calculated in order to estimate the mean to within a standardized level of uncertainty that will be used to guide sampling effort across all vegetation types (i.e., estimated to within ±10% with 95% confidence). Finally, we employed a semivariogram approach to determine optimal sample size and spacing.
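    Estimating a mean to within ±10% at 95% confidence follows the standard CV-based sample size formula, iterated because the t quantile depends on the sample size. A minimal sketch (the 40% CV is an assumed value, not the Domain 10 result):

    ```python
    from scipy import stats

    def sample_size(cv: float, rel_error: float = 0.10,
                    confidence: float = 0.95) -> int:
        """Smallest n such that the mean is estimated to within
        +/- rel_error of its value with the given confidence, for data
        whose coefficient of variation is cv."""
        n = 2
        while True:
            t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
            if n >= (t * cv / rel_error) ** 2:
                return n
            n += 1

    # e.g. clip-harvest biomass with 40% plot-to-plot variation
    print(sample_size(cv=0.40))  # plots needed for +/-10% at 95% confidence
    ```

    Because the required n scales with (CV/error)^2, halving the allowable error roughly quadruples the number of plots, which is why optimizing sample size against a fixed uncertainty target matters at 60+ sites.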

  6. Accounting for missing data in the estimation of contemporary genetic effective population size (N(e)).

    PubMed

    Peel, D; Waples, R S; Macbeth, G M; Do, C; Ovenden, J R

    2013-03-01

    Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (N(e)) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (N(e)). To evaluate bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known N(e) and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed-inverse variance-weighted harmonic mean) consistently performed the best for both single-sample and two-sample (temporal) methods of estimating N(e) and outperformed some methods currently in widespread use. The approach adopted here may be a starting point to adjust other population genetics methods that include per-locus sample size components. © 2012 Blackwell Publishing Ltd.

  7. Structural study of some divalent aluminoborate glasses using ultrasonic and positron annihilation techniques

    NASA Astrophysics Data System (ADS)

    Saddeek, Yasser B.; Mohamed, Hamdy F. M.; Azooz, Moenis A.

    2004-07-01

    Positron annihilation lifetime (PAL), ultrasonic techniques, and differential thermal analysis (DTA) were used to study the structure of some aluminoborate glasses. The basic compositions of these glasses are 50 B2O3 + 10 Al2O3 + 40 RO (wt%), where RO is a divalent oxide (MgO, CaO, SrO, or CdO). The ultrasonic data show that the rigidity increases from MgO to CaO, decreases at SrO, and increases again at CdO. The glass transition temperature (determined from DTA) decreases from MgO to SrO and then increases at CdO. The trend of the thermal properties was attributed to thermal stability. The experimental data are correlated with the internal glass structure and its connectivity. The PAL data show an inverse correlation between the relative fraction of open-volume holes and the density of the samples. There is also a good correlation between the ortho-positronium (o-Ps) lifetime (a measure of open-volume hole size) and the bulk modulus of the samples determined by the ultrasonic technique. The open-volume hole size distributions show that the holes expand in size in the order CaO, SrO, MgO, and CdO, with the distribution function moving to larger hole volumes.

  8. Synthesis of nanocrystalline zirconia by amorphous citrate route: structural and thermal (HTXRD) studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhagwat, Mahesh; Ramaswamy, Veda

    Nanocrystalline zirconia powder with a fairly narrow particle size distribution has been synthesized by the amorphous citrate route. The sample obtained has a high BET surface area of 89 m² g⁻¹. Rietveld refinement of the powder X-ray diffraction (XRD) profile of the zirconia sample confirms stabilization of zirconia in the tetragonal phase with around 8% monoclinic impurity. The data show the presence of both anionic and cationic vacancies in the lattice. The crystallite size determined from XRD is 8 nm, in close agreement with the particle size determined by TEM. The in situ high-temperature X-ray diffraction (HTXRD) study revealed high thermal stability of the mixture until around 1023 K, after which the transformation of the tetragonal phase into the monoclinic phase was observed as a function of temperature up to 1473 K. This transformation is accompanied by an increase in the crystallite size of the sample from 8 to 55 nm. The thermal expansion coefficients are 9.14 × 10⁻⁶ K⁻¹ along the 'a'-axis and 15.8 × 10⁻⁶ K⁻¹ along the 'c'-axis. The lattice thermal expansion coefficient in the temperature range 298-1623 K is 34.6 × 10⁻⁶ K⁻¹.

  9. Water quality monitoring: A comparative case study of municipal and Curtin Sarawak's lake samples

    NASA Astrophysics Data System (ADS)

    Anand Kumar, A.; Jaison, J.; Prabakaran, K.; Nagarajan, R.; Chan, Y. S.

    2016-03-01

    In this study, the particle size distribution and zeta potential of suspended particles in municipal water and in the surface water of Curtin Sarawak's lake were compared; the samples were analysed using the dynamic light scattering method. A high concentration of suspended particles degrades water quality and suppresses aquatic photosynthetic systems. A new approach was applied in the current work to determine the particle size distribution and zeta potential of the suspended particles present in the water samples. The results for the lake samples showed that the particle size ranges from 180 nm to 1345 nm and the zeta potential values range from -8.58 mV to -26.1 mV. Higher zeta potential values were observed in the surface water samples of Curtin Sarawak's lake than in the municipal water. These zeta potential values indicate that the suspended particles are stable and that the likelihood of agglomeration is lower in the lake water samples. The effects of physico-chemical parameters on the zeta potential of the water samples are also discussed.

  10. Geometrical characterization of perlite-metal syntactic foam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borovinšek, Matej, E-mail: matej.borovinsek@um.si

    This paper introduces an improved method for the detailed geometrical characterization of perlite-metal syntactic foam. This novel metallic foam is created by infiltrating a packed bed of expanded perlite particles with liquid aluminium alloy. The geometry of the solidified metal is thus defined by the perlite particle shape, size and morphology. The method is based on segmented micro-computed tomography data and allows for automated determination of the distributions of pore size, sphericity, orientation and location. The pore (i.e. particle) size distribution and pore orientation are determined by a multi-criteria k-nearest neighbour algorithm for pore identification. The results indicate a weak density gradient parallel to the casting direction and a slight preference of particle orientation perpendicular to the casting direction. - Highlights: • A new method for identification of pores in porous materials was developed. • It was applied on perlite-metal syntactic foam samples. • A porosity decrease in the axial direction of the samples was determined. • Pore shape analysis showed a high percentage of spherical pores. • Orientation analysis showed that more pores are oriented in the radial direction.

  11. Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall

    NASA Astrophysics Data System (ADS)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-11-01

    The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.

  12. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150

  13. Streamflow and suspended-sediment transport in Garvin Brook, Winona County, southeastern Minnesota: Hydrologic data for 1982

    USGS Publications Warehouse

    Payne, G.A.

    1983-01-01

    Streamflow and suspended-sediment-transport data were collected in Garvin Brook watershed in Winona County, southeastern Minnesota, during 1982. The data collection was part of a study to determine the effectiveness of agricultural best-management practices designed to improve rural water quality. The study is part of a Rural Clean Water Program demonstration project undertaken by the U.S. Department of Agriculture. Continuous streamflow data were collected at three gaging stations during March through September 1982. Suspended-sediment samples were collected at two of the gaging stations. Samples were collected manually at weekly intervals. During periods of rapidly changing stage, samples were collected at 30-minute to 12-hour intervals by stage-activated automatic samplers. The samples were analyzed for suspended-sediment concentration and particle-size distribution. Particle-size distributions were also determined for one set of bed-material samples collected at each sediment-sampling site. The streamflow and suspended-sediment-concentration data were used to compute records of mean-daily flow, mean-daily suspended-sediment concentration, and daily suspended-sediment discharge. The daily records are documented and results of analyses for particle-size distribution and of vertical sampling in the stream cross sections are given.

  14. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    PubMed

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial are not always continuous; they may be discrete. In this paper, the authors derive the power function and discuss sample size requirements for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on power and sample size by varying its value from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Misusing a Poisson model for negative binomial data may therefore easily cost up to 20% in power, depending on the value of the dispersion parameter.
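    The dispersion effect is easy to demonstrate by simulation. A minimal Monte Carlo sketch of a two-sample Wald test on the rate difference (not the paper's three-arm equivalence test; all rates, sizes, and dispersion values below are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def nb(mean: float, k: float, size: int) -> np.ndarray:
        """Negative binomial draws with var = mean + mean**2 / k."""
        if np.isinf(k):
            return rng.poisson(mean, size)  # Poisson limit: no overdispersion
        return rng.negative_binomial(k, k / (k + mean), size)

    def power(mean1: float, mean2: float, k: float, n: int,
              reps: int = 4000) -> float:
        """Monte Carlo power of a two-sided Wald test at alpha = 0.05."""
        hits = 0
        for _ in range(reps):
            x, y = nb(mean1, k, n), nb(mean2, k, n)
            se = np.sqrt(x.var(ddof=1) / n + y.var(ddof=1) / n)
            hits += abs(x.mean() - y.mean()) / se > 1.96
        return hits / reps

    # Same rates and n throughout: power erodes as dispersion grows (smaller k).
    for k in (np.inf, 5.0, 1.0, 0.5):
        print(f"k = {k}: power = {power(2.0, 2.6, k, n=100):.2f}")
    ```

    Holding the rates fixed while shrinking k inflates the variance from mean to mean + mean²/k, which is exactly the mechanism behind the paper's warning about fitting Poisson models to negative binomial data.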

  15. Ciguatoxic Potential of Brown-Marbled Grouper in Relation to Fish Size and Geographical Origin

    PubMed Central

    Chan, Thomas Y. K.

    2015-01-01

    To determine the ciguatoxic potential of brown-marbled grouper (Epinephelus fuscoguttatus) in relation to fish size and geographical origin, this review systematically analyzed: 1) reports of large ciguatera outbreaks and outbreaks with description of the fish size; 2) Pacific ciguatoxin (P-CTX) profiles and levels and mouse bioassay results in fish samples from ciguatera incidents; 3) P-CTX profiles and levels and risk of toxicity in relation to fish size and origin; 4) regulatory measures restricting fish trade and fish size preference of the consumers. P-CTX levels in flesh and size dependency of toxicity indicate that the risk of ciguatera after eating E. fuscoguttatus varies with its geographical origin. For a large-sized grouper, it is necessary to establish legal size limits and control measures to protect public health and prevent overfishing. More risk assessment studies are required for E. fuscoguttatus to determine the size threshold above which the risk of ciguatera significantly increases. PMID:26324735

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    E.N. Stepanov; I.I. Mel'nikov; V.P. Gridasov

    In active production at OAO Magnitogorskii Metallurgicheskii Kombinat (MMK), samples of melt materials were taken during shutdown and during planned repairs at furnaces 1 and 8. In particular, coke was taken from the tuyere zone at different distances from the tuyere tip. The mass of the point samples was 2-15 kg, depending on the sampling zone. The material extracted from each zone underwent magnetic separation and screening by size class. The resulting coke sample was averaged out and divided into parts: one for determining the granulometric composition and mechanical strength; and the other for technical analysis and determination of the physicochemical properties of the coke.

  17. Optimisation of a sample preparation procedure for the screening of fungal infection and assessment of deoxynivalenol content in maize using mid-infrared attenuated total reflection spectroscopy.

    PubMed

    Kos, Gregor; Lohninger, Hans; Mizaikoff, Boris; Krska, Rudolf

    2007-07-01

    A sample preparation procedure for the determination of deoxynivalenol (DON) using attenuated total reflection mid-infrared spectroscopy is presented. Repeatable spectra were obtained from samples featuring a narrow particle size distribution. Samples were ground with a centrifugal mill and analysed with an analytical sieve shaker. Particle sizes of <100, 100-250, 250-500, 500-710 and 710-1000 µm were obtained. Repeatability, classification and quantification abilities for DON were compared with non-sieved samples. The 100-250 µm fraction showed the best repeatability. The relative standard deviation of spectral measurements improved from 20 to 4.4% and 100% of sieved samples were correctly classified compared with 79% of non-sieved samples. The DON level in analysed fractions was a good estimate of overall toxin content.

  18. A Circular-Impact Sampler for Forest Litter

    Treesearch

    Stephen S. Sackett

    1971-01-01

    Sampling the forest floor to determine litter weight is a tedious, time-consuming job. A new device has been designed and tested at the Southern Forest Fire Laboratory that eliminates many of the past sampling problems. The sampler has been fabricated in two sizes (6- and 12-inch diameters), and these are comparable in accuracy and sampling intensity. This Note...

  19. Decisions from Experience: Why Small Samples?

    ERIC Educational Resources Information Center

    Hertwig, Ralph; Pleskac, Timothy J.

    2010-01-01

    In many decisions we cannot consult explicit statistics telling us about the risks involved in our actions. In lieu of such data, we can arrive at an understanding of our dicey options by sampling from them. The size of the samples that we take determines, ceteris paribus, how good our choices will be. Studies of decisions from experience have…

  20. Breast cancer: determining the genetic profile from ultrasound-guided percutaneous biopsy specimens obtained during the diagnostic workups.

    PubMed

    López Ruiz, J A; Zabalza Estévez, I; Mieza Arana, J A

    2016-01-01

    To evaluate the possibility of determining the genetic profile of primary malignant tumors of the breast from specimens obtained by ultrasound-guided percutaneous biopsies during the diagnostic imaging workup. This is a retrospective study in 13 consecutive patients diagnosed with invasive breast cancer by B-mode ultrasound-guided 12 G core needle biopsy. After clinical indication, the pathologist decided whether the paraffin block specimens seemed suitable (on the basis of tumor size, validity of the sample, and percentage of tumor cells) before sending them for genetic analysis with the MammaPrint® platform. The size of the tumors on ultrasound ranged from 0.6cm to 5cm. In 11 patients the preserved specimen was considered valid and suitable for use in determining the genetic profile. In 1 patient (with a 1cm tumor) the pathologist decided that it was necessary to repeat the core biopsy to obtain additional samples. In 1 patient (with a 5cm tumor) the specimen was not considered valid by the genetic laboratory. The percentage of tumor cells in the samples ranged from 60% to 70%. In 11/13 cases (84.62%) it was possible to do the genetic analysis on the previously diagnosed samples. In most cases, regardless of tumor size, it is possible to obtain the genetic profile from tissue specimens obtained with ultrasound-guided 12 G core biopsy preserved in paraffin blocks. Copyright © 2015 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  1. Effects of crystallite size on the structure and magnetism of ferrihydrite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiaoming; Zhu, Mengqiang; Koopal, Luuk K.

    2015-12-15

    The structure and magnetic properties of nano-sized (1.6 to 4.4 nm) ferrihydrite samples are systematically investigated through a combination of X-ray diffraction (XRD), X-ray pair distribution function (PDF), X-ray absorption spectroscopy (XAS) and magnetic analyses. The XRD, PDF and Fe K-edge XAS data of the ferrihydrite samples are all fitted well with the Michel ferrihydrite model, indicating similar local-, medium- and long-range ordered structures. PDF and XAS fitting results indicate that, with increasing crystallite size, the average coordination numbers of Fe–Fe and the unit cell parameter c increase, while Fe2 and Fe3 vacancies and the unit cell parameter a decrease. Mössbauer results indicate that the surface layer is relatively disordered, which might have been caused by the random distribution of Fe vacancies. These results support Hiemstra's surface-depletion model in terms of the location of disorder and the variations of Fe2 and Fe3 occupancies with size. Magnetic data indicate that the ferrihydrite samples show antiferromagnetism superimposed with a ferromagnetic-like moment at lower temperatures (100 K and 10 K), but ferrihydrite is paramagnetic at room temperature. In addition, both the magnetization and coercivity decrease with increasing ferrihydrite crystallite size due to strong surface effects in fine-grained ferrihydrites. Smaller ferrihydrite samples show less magnetic hyperfine splitting and a lower unblocking temperature (T_B) than larger samples. The dependence of magnetic properties on grain size for nano-sized ferrihydrite provides a practical way to determine the crystallite size of ferrihydrite quantitatively in natural environments or artificial systems.

  2. Only pick the right grains: Modelling the bias due to subjective grain-size interval selection for chronometric and fingerprinting approaches.

    NASA Astrophysics Data System (ADS)

    Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian

    2016-04-01

    Many modern approaches to radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain", whose definitions sometimes even differ between methods. Such arbitrary intervals violate basic process-based concepts of sediment transport and can thus introduce significant bias to the analysis outcome (i.e., a deviation of the measured from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows the bias due to any given analysis size interval to be quantified for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e. a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions, and we present a statistical approach (end-member modelling analysis, EMMA) that accounts for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport-process-related distributions and their contribution to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.
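    EMMA decomposes a matrix of measured grain-size distributions into a small set of transport-related end members plus per-sample mixing scores. As a rough stand-in for the eigenspace rotation used in published implementations, a non-negative matrix factorization gives the flavour of the unmixing step (fully synthetic data; component order and scaling are arbitrary):

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(3)

    # Two synthetic transport-related end members on a log-spaced size axis
    size = np.geomspace(1, 1000, 80)  # grain size, um
    em1 = np.exp(-0.5 * ((np.log(size) - np.log(30)) / 0.4) ** 2)   # fine mode
    em2 = np.exp(-0.5 * ((np.log(size) - np.log(200)) / 0.3) ** 2)  # coarse mode
    raw = np.vstack([em1, em2])
    ems = raw / raw.sum(axis=1, keepdims=True)

    # 25 "samples" = random mixtures of the end members plus a little noise
    scores = rng.dirichlet([2, 2], size=25)
    X = scores @ ems + rng.normal(0, 1e-4, (25, ems.shape[1])).clip(0)

    # Unmix: rows of H approximate the end members, W the mixing proportions
    model = NMF(n_components=2, init="nndsvda", max_iter=2000)
    W = model.fit_transform(X)
    print("recovered mixing proportions (first 3 samples):")
    print(W[:3] / W[:3].sum(axis=1, keepdims=True))
    ```

    The recovered proportions are what would then guide the choice of a preparation interval that excludes the "undesired" end member.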

  3. Measurement of particle size distribution of soil and selected aggregate sizes using the hydrometer method and laser diffractometry

    NASA Astrophysics Data System (ADS)

    Guzmán, G.; Gómez, J. A.; Giráldez, J. V.

    2010-05-01

    Soil particle size distribution has traditionally been determined by the hydrometer or the sieve-pipette methods, both time consuming and requiring a relatively large soil sample. This can be a limitation when the sample is small, for instance in the analysis of suspended sediment. A possible alternative is an optical technique such as laser diffractometry. However, the literature indicates that the use of this technique as an alternative to traditional methods is still limited because of the difficulty in replicating results obtained with the standard methods. In this study we present the percentages of soil grain sizes determined using laser diffractometry within ranges set between 0.04 and 2000 μm. A Beckman-Coulter® LS-230 with a 750 nm laser beam and software version 3.2 was used to analyse five soils representative of southern Spain: Alameda, Benacazón, Conchuela, Lanjarón and Pedrera. In three of the studied soils (Alameda, Benacazón and Conchuela) the particle size distribution of each aggregate size class was also determined. Aggregate size classes were obtained by dry sieve analysis using a Retsch AS 200 basic®. Two hundred grams of air-dried soil were sieved for 150 s at an amplitude of 2 mm, yielding nine size classes between 2000 μm and 10 μm. Analyses were performed in triplicate. The soil sample preparation was also adapted to our conditions. A small amount of each soil sample (less than 1 g) was transferred to the fluid module filled with running water and disaggregated by ultrasonication at energy level 4 with 80 ml of sodium hexametaphosphate solution for 580 seconds. Two replicates of each sample were measured. Each measurement was a 90-second reading at a pump speed of 62. After the laser diffractometry analysis, each soil and its aggregate classes were processed by calibrating its own optical model, fitting the optical parameters that depend mainly on the color and shape of the analyzed particles. As a second alternative, a single optical model valid for a broad range of soils, developed by the Department of Soil, Water, and Environmental Science of the University of Arizona (personal communication, already submitted), was tested. The results were compared with the particle size distributions measured in the same soils and aggregate classes using the hydrometer method. Preliminary results indicate a better calibration of the technique using the University of Arizona optical model, which yielded good correlations (r²>0.85). This result suggests that, with an appropriate calibration of the optical model, laser diffractometry can provide a reliable soil particle characterization.

  4. 137Cs as a tracer of recent sedimentary processes in Lake Michigan

    USGS Publications Warehouse

    Cahill, R.A.; Steele, J.D.

    1986-01-01

    To determine recent sediment movement, we measured the levels of 137Cs (an artificial radionuclide produced during nuclear weapons testing) in 118 samples from southern Lake Michigan and 27 from Green Bay. These samples, drawn from 286 grab samples of the upper 3 cm of sediment, were collected in 1975 as part of a systematic study of Lake Michigan sediment. 137Cs levels correlated well with concentrations of organic carbon, lead, and other anthropogenic trace metals in the sediment. 137Cs had a higher correlation with silt-sized than with clay-sized sediment (0.55 and 0.46, respectively). Atmospherically derived 137Cs and trace metals are being redistributed by sedimentary processes in Lake Michigan after being incorporated in suspended sediment. We determined a distribution pattern of 137Cs that represents the areas of southern Lake Michigan where sediment deposition is occurring. © 1986 Dr W. Junk Publishers.

  5. Ostwald ripening of clays and metamorphic minerals

    USGS Publications Warehouse

    Eberl, D.D.; Srodon, J.; Kralik, M.; Taylor, B.E.; Peterman, Z.E.

    1990-01-01

    Analyses of particle size distributions indicate that clay minerals and other diagenetic and metamorphic minerals commonly undergo recrystallization by Ostwald ripening. The shapes of their particle size distributions can yield the rate law for this process. One consequence of Ostwald ripening is that a record of the recrystallization process is preserved in the various particle sizes. Therefore, one can determine the detailed geologic history of clays and other recrystallized minerals by separating, from a single sample, the various particle sizes for independent chemical, structural, and isotopic analyses.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frølich, S.; Leemreize, H.; Jakus, A.

    A model sample consisting of two different hydroxyapatite (hAp) powders was used as a bone phantom to investigate the extent to which X-ray diffraction tomography could map differences in hAp lattice constants and crystallite size. The diffraction data were collected at beamline 1-ID, the Advanced Photon Source, using monochromatic 65 keV X-radiation, a 25 × 25 µm pinhole beam and translation/rotation data collection. The diffraction pattern was reconstructed for each volume element (voxel) in the sample, and Rietveld refinement was used to determine the hAp lattice constants. The crystallite size for each voxel was also determined from the 00.2 hAp diffraction peak width. The results clearly show that differences between hAp powders could be measured with diffraction tomography.
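
    For context, a crystallite size is conventionally obtained from a diffraction peak width through the Scherrer relation; the snippet below is a generic sketch of that step with an assumed shape factor, angle, and width, not the beamline's actual data-reduction code.

        # Scherrer estimate of crystallite size from a diffraction peak width:
        # D = K * lambda / (beta * cos(theta)), beta = FWHM in radians.
        import numpy as np

        def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
            beta = np.radians(fwhm_deg)
            theta = np.radians(two_theta_deg) / 2
            return K * wavelength_nm / (beta * np.cos(theta))

        # 65 keV X-rays: lambda = 1.2398 / 65 ~ 0.0191 nm; an assumed 0.05 deg
        # FWHM at 2-theta = 5 deg gives a size of roughly 20 nm.
        print(f"{scherrer_size(0.0191, 0.05, 5.0):.0f} nm")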

  7. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
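
    For context, the conventional role of the intraclass correlation in this task is through the design effect 1 + (m - 1) * rho, which inflates an individually randomized sample size; the short calculation below is that textbook computation, not the empirical Bayes procedure the record proposes. Numbers are illustrative.

        # Design-effect sizing for a cluster randomized trial.
        import math

        def clusters_needed(n_individual, cluster_size, icc):
            """Clusters per arm after inflating n by 1 + (m - 1) * rho."""
            design_effect = 1 + (cluster_size - 1) * icc
            return math.ceil(n_individual * design_effect / cluster_size)

        # 64 pupils per arm under individual randomization, classes of 25,
        # ICC = 0.10  ->  9 classes per arm.
        print(clusters_needed(64, 25, 0.10))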

  8. Particle size distributions of metal and non-metal elements in an urban near-highway environment

    EPA Science Inventory

    Determination of the size-resolved elemental composition of near-highway particulate matter (PM) is important due to the health and environmental risks it poses. In the current study, twelve 24 h PM samples were collected (in July-August 2006) using a low-pressure impactor positi...

  9. 7 CFR 923.322 - Washington cherry handling regulation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... solids as determined from a composite sample by refractometer prior to packing, at time of packing, or at.../row size designation the row count/row size marked shall be one of those shown in Column 1 of the... corresponding diameter shown in Column 2 of such table: Provided, That the content of individual containers in...

  10. Characterization of the Particle Size and Polydispersity of Dicumarol Using Solid-State NMR Spectroscopy.

    PubMed

    Dempah, Kassibla Elodie; Lubach, Joseph W; Munson, Eric J

    2017-03-06

    A variety of particle sizes of a model compound, dicumarol, were prepared and characterized in order to investigate the correlation between particle size and solid-state NMR (SSNMR) proton spin-lattice relaxation (1H T1) times. Conventional laser diffraction and scanning electron microscopy were used as particle size measurement techniques and showed crystalline dicumarol samples with sizes ranging from tens of micrometers to a few micrometers. Dicumarol samples were prepared using both bottom-up and top-down particle size control approaches, via antisolvent microprecipitation and cryogrinding. It was observed that smaller particles of dicumarol generally had shorter 1H T1 times than larger ones. Additionally, cryomilled particles had the shortest 1H T1 times encountered (8 s). SSNMR 1H T1 times of all the samples were measured and showed as-received dicumarol to have a T1 of 1500 s, whereas the 1H T1 times of the precipitated samples ranged from 20 to 80 s, with no apparent change in the physical form of dicumarol. Physical mixtures of different sized particles were also analyzed to determine the effect of sample inhomogeneity on 1H T1 values. Mixtures of cryoground and as-received dicumarol were clearly inhomogeneous as they did not fit well to a one-component relaxation model, but could be fit much better to a two-component model with both fast- and slow-relaxing regimes. Results indicate that samples of crystalline dicumarol containing two significantly different particle size populations could be deconvoluted solely on the basis of their differences in 1H T1 times. Relative populations of each particle size regime could also be approximated using two-component fitting models. Using NMR theory on spin diffusion as a reference, and taking into account the presence of crystal defects, a model for the correlation between the particle size of dicumarol and its 1H T1 time was proposed.
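
    A two-component relaxation analysis of the kind described can be sketched as a biexponential saturation-recovery fit; the model, time points, and populations below are illustrative, with T1 values loosely echoing the 8 s and 1500 s extremes reported in the abstract.

        # Two-component saturation-recovery fit for 1H T1 data.
        import numpy as np
        from scipy.optimize import curve_fit

        def two_component(t, a1, t1_fast, a2, t1_slow):
            """Sum of two saturation-recovery terms with distinct T1 values."""
            return (a1 * (1 - np.exp(-t / t1_fast))
                    + a2 * (1 - np.exp(-t / t1_slow)))

        # Synthetic recovery curve: fast (8 s) and slow (1500 s) populations.
        t = np.geomspace(0.5, 6000, 40)
        y = two_component(t, 0.4, 8.0, 0.6, 1500.0)
        y += np.random.default_rng(1).normal(0, 0.005, t.size)

        popt, _ = curve_fit(two_component, t, y, p0=[0.5, 5.0, 0.5, 1000.0])
        print("fractions:", popt[0], popt[2], "T1s:", popt[1], popt[3])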

  11. Particulate, colloidal, and dissolved-phase associations of plutonium and americium in a water sample from well 1587 at the Rocky Flats Plant, Colorado

    USGS Publications Warehouse

    Harnish, R.A.; McKnight, Diane M.; Ranville, James F.

    1994-01-01

    In November 1991, the initial phase of a study to determine the dominant aqueous phases that control the transport of plutonium (Pu), americium (Am), and uranium (U) in surface and groundwater at the Rocky Flats Plant was undertaken by the U.S. Geological Survey. By use of the techniques of stirred-cell spiral-flow filtration and crossflow ultrafiltration, particles of three size fractions were collected from a 60-liter sample of water from well 1587 at the Rocky Flats Plant. These samples and corresponding filtrate samples were analyzed for Pu and Am. As calculated from the analysis of filtrates, 65 percent of the Pu-239 and -240 activity in the sample was associated with the particulate and largest colloidal size fractions. The particulate (22 percent) and colloidal (43 percent) fractions were determined to have significant activities in relation to whole-water Pu activity. Am and Pu-238 activities were too low to be analyzed. Examination and analyses of the particulate and colloidal phases indicated the presence of mineral species (iron oxyhydroxides and clay minerals) and natural organic matter that can facilitate the transport of actinides in groundwater. High concentrations of the transition metals copper and zinc in the smallest colloid fractions strongly indicate a potential for organic complexation of metals, and potentially of actinides, in this size fraction.

  12. A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions

    USGS Publications Warehouse

    Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.

    2005-01-01

    Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres, without a consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of the measured 2D and estimated 3D grain size distributions should be π/4 (= 0.785). The actual 2D size distribution of the radii of submicron-sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is found to fit the gamma size distribution model better than the log-normal model. Numerical analysis of 2D size distributions of Fe0 globules in 9 lunar soils shows that the average 2D/3D ratio of the means is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (the reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to an estimate of their actual 3D size. © 2005 Geological Society of America.
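
    The π/4 ratio is easy to verify numerically: for spheres of radius R cut by random planes, the plane-to-center distance is uniform on [0, R), and the mean radius of the circular sections tends to (π/4) R. A short simulation under that assumption:

        # Monte Carlo check of the pi/4 ratio between mean 2D section radius
        # and true 3D sphere radius.
        import numpy as np

        rng = np.random.default_rng(42)
        R = 1.0
        d = rng.uniform(0, R, 1_000_000)   # distance of random plane from center
        r2d = np.sqrt(R**2 - d**2)         # radius of the circular cross section

        print(r2d.mean())      # ~0.785 = pi/4
        print(np.pi / 4)       # theoretical ratio
        print(1 / r2d.mean())  # ~1.273, the recommended conversion factor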

  13. Characterization of hydrotreated Mayan and Wilmington vacuum tower bottoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearson, C.D.; Green, J.B.; Bhan, O.K.

    1989-04-01

    Mayan and Wilmington vacuum tower bottoms were hydrotreated at various severity levels in a batch autoclave, with and without catalyst. Each of the feeds and the hydrotreated products was separated into acid, base, and neutral (ABN) fractions using a unique non-aqueous ion exchange technique. The feeds, hydrotreated whole products, and ABN fractions were characterized by determining their elemental and metal contents. Selected samples were analyzed by a size exclusion chromatography/inductively coupled plasma technique to determine the molecular size distributions of various species.

  14. Chefs' opinions of restaurant portion sizes.

    PubMed

    Condrasky, Marge; Ledikwe, Jenny H; Flood, Julie E; Rolls, Barbara J

    2007-08-01

    The objectives were to determine who establishes restaurant portion sizes and factors that influence these decisions, and to examine chefs' opinions regarding portion size, nutrition information, and weight management. A survey was distributed to chefs to obtain information about who is responsible for determining restaurant portion sizes, factors influencing restaurant portion sizes, what food portion sizes are being served in restaurants, and chefs' opinions regarding nutrition information, health, and body weight. The final sample consisted of 300 chefs attending various culinary meetings. Executive chefs were identified as being primarily responsible for establishing portion sizes served in restaurants. Factors reported to have a strong influence on restaurant portion sizes included presentation of foods, food cost, and customer expectations. While 76% of chefs thought that they served "regular" portions, the actual portions of steak and pasta they reported serving were 2 to 4 times larger than serving sizes recommended by the U.S. government. Chefs indicated that they believe that the amount of food served influences how much patrons consume and that large portions are a problem for weight control, but their opinions were mixed regarding whether it is the customer's responsibility to eat an appropriate amount when served a large portion of food. Portion size is a key determinant of energy intake, and the results from this study suggest that cultural norms and economic value strongly influence the determination of restaurant portion sizes. Strategies are needed to encourage chefs to provide and promote portions that are appropriate for customers' energy requirements.

  15. Setting up equine embryo gender determination by preimplantation genetic diagnosis in a commercial embryo transfer program.

    PubMed

    Herrera, C; Morikawa, M I; Bello, M B; von Meyeren, M; Centeno, J Eusebio; Dufourq, P; Martinez, M M; Llorente, J

    2014-03-15

    Preimplantation genetic diagnosis (PGD) allows identifying genetic traits in early embryos. Because in some equine breeds, like Polo Argentino, females are preferred to males for competition, PGD can be used to determine the gender of the embryo before transfer and thus allow the production of only female pregnancies. This procedure could have a great impact on commercial embryo production programs. The present study was conducted to adapt gender selection by PGD to a large-scale equine embryo transfer program. To achieve this, we studied (i) the effect on pregnancy rates of holding biopsied embryos for 7 to 10 hours in holding medium at 32 °C before transfer, (ii) the effect on pregnancy rates of using embryos of different sizes for biopsy, and (iii) the efficiency of amplification by heating biopsies before polymerase chain reaction. Equine embryos were classified by size (≤300, 300-1000, and >1000 μm), biopsied, and transferred 1 to 2 or 7 to 10 hours after flushing. Some of the biopsy samples obtained were incubated for 10 minutes at 95 °C and the rest remained untreated. Pregnancy rates were recorded at 25 days of gestation; fetal gender was determined using ultrasonography and compared with PGD results. Holding biopsied embryos for 7 to 10 hours before transfer produced pregnancy rates similar to those for biopsied embryos transferred within 2 hours (63% and 57%, respectively). These results did not differ from pregnancy rates of nonbiopsied embryos undergoing the same holding times (50% for 7-10 hours and 63% for 1-2 hours). Pregnancy rates for biopsied and nonbiopsied embryos did not differ between size groups or between biopsied and nonbiopsied embryos within the same size group (P > 0.05). Incubating biopsy samples for 10 minutes at 95 °C before polymerase chain reaction significantly increased the diagnosis rate (78.5% vs. 45.5% for treated and nontreated biopsy samples respectively). Gender determination using incubated biopsy samples matched the results obtained using ultrasonography in all pregnancies assessed (11/11, 100%); untreated biopsy samples were correctly diagnosed in 36 of 41 assessed pregnancies (87.8%), although the difference between treated and untreated biopsy samples was not significant. Our results demonstrated that biopsied embryos can remain in holding medium before being transferred, until gender diagnosis by PGD is complete (7-10 hours), without affecting pregnancy rates. This simplifies the management of an embryo transfer program willing to incorporate PGD for gender selection, by transferring only embryos of the desired sex. Embryo biopsy can be performed in a clinical setting on embryos of different sizes, without affecting their viability. Additionally, we showed that pretreating biopsy samples with a short incubation at 95 °C improved the overall efficiency of embryo sex determination. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Probabilistic Design of a Mars Sample Return Earth Entry Vehicle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Mitcheltree, Robert A.

    2002-01-01

    The driving requirement for design of a Mars Sample Return mission is to assure containment of the returned samples. Designing to, and demonstrating compliance with, such a requirement requires physics based tools that establish the relationship between engineer's sizing margins and probabilities of failure. The traditional method of determining margins on ablative thermal protection systems, while conservative, provides little insight into the actual probability of an over-temperature during flight. The objective of this paper is to describe a new methodology for establishing margins on sizing the thermal protection system (TPS). Results of this Monte Carlo approach are compared with traditional methods.
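
    The record describes a Monte Carlo alternative to fixed margins; the toy sketch below illustrates the general idea (sample uncertain inputs, run a sizing model, read off a failure probability per margin) using invented input distributions and a placeholder model, not the authors' physics-based tools.

        # Toy Monte Carlo margin study for a thermal protection system.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000
        heat_load = rng.normal(1.0, 0.10, n)      # normalized heat load
        recession = rng.normal(1.0, 0.08, n)      # normalized ablation rate
        required = heat_load * recession          # stand-in sizing model

        for margin in (0.1, 0.2, 0.3):
            thickness = 1.0 + margin              # nominal sizing plus margin
            p_fail = np.mean(required > thickness)
            print(f"margin {margin:.0%}: P(over-temperature) ~ {p_fail:.2e}")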

  17. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
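
    The ingredients described above, a B-spline expansion per covariate and a sparsity penalty that zeroes whole blocks of coefficients, can be sketched as follows. For simplicity, a plain Lasso on the spline blocks stands in for the adaptive group Lasso, so this is a rough illustration of component selection, not the authors' estimator; all data and parameters are invented.

        # Component selection in an additive model: spline basis + sparse fit.
        import numpy as np
        from sklearn.preprocessing import SplineTransformer
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n, p = 200, 10                          # 10 candidate components
        X = rng.uniform(-1, 1, (n, p))
        y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, n)

        spline = SplineTransformer(n_knots=6, degree=3, include_bias=False)
        B = spline.fit_transform(X)             # columns grouped by covariate
        fit = LassoCV(cv=5).fit(B, y - y.mean())

        # A component is "selected" if any coefficient in its block survives.
        k = B.shape[1] // p                     # basis functions per covariate
        selected = [j for j in range(p)
                    if np.any(np.abs(fit.coef_[j * k:(j + 1) * k]) > 1e-8)]
        print(selected)                         # ideally [0, 1]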

  18. 3D contour fluorescence spectroscopy with Brus model: Determination of size and band gap of double stranded DNA templated silver nanoclusters

    NASA Astrophysics Data System (ADS)

    Kamalraj, Devaraj; Yuvaraj, Selvaraj; Yoganand, Coimbatore Paramasivam; Jaffer, Syed S.

    2018-01-01

    Here, we propose a new synthetic methodology for silver nanocluster preparation using a double-stranded DNA (ds-DNA) template, which has not been reported previously. A new calculation method was formulated to determine the size of the nanoclusters and their band gaps using the steady-state 3D contour fluorescence technique with the Brus model. Conventionally, the structure and size of nanoclusters are determined using high-resolution transmission electron microscopy (HR-TEM). Before HR-TEM imaging, however, samples must be dried, which causes aggregation and forms larger polycrystalline particles, and the procedure is time consuming and expensive. In the present methodology, we determined the size and band gap of the nanoclusters in liquid form, without any polycrystalline aggregation, using the 3D contour fluorescence technique as an alternative to HR-TEM.
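
    For reference, the standard effective-mass (Brus) expression relating the confined band gap to cluster radius R is reproduced below in LaTeX; m_e* and m_h* are the effective carrier masses and epsilon the dielectric constant. The exact variant applied in the paper may differ.

        % Brus relation: quantum confinement plus Coulomb correction.
        E_g(R) = E_g^{\mathrm{bulk}}
                 + \frac{\hbar^2 \pi^2}{2 R^2}
                   \left( \frac{1}{m_e^*} + \frac{1}{m_h^*} \right)
                 - \frac{1.8\, e^2}{4 \pi \varepsilon \varepsilon_0 R}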

  19. 40 CFR 53.40 - General provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 50 percent cutpoint of a test sampler shall be determined in a wind tunnel using 10 particle sizes... particle sampling effectiveness of a test sampler shall be determined in a wind tunnel using 25 µm... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR...

  20. 40 CFR 53.40 - General provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 50 percent cutpoint of a test sampler shall be determined in a wind tunnel using 10 particle sizes... particle sampling effectiveness of a test sampler shall be determined in a wind tunnel using 25 µm... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR...

  1. 40 CFR 53.40 - General provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 50 percent cutpoint of a test sampler shall be determined in a wind tunnel using 10 particle sizes... particle sampling effectiveness of a test sampler shall be determined in a wind tunnel using 25 µm... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR...

  2. Bayesian assurance and sample size determination in the process validation life-cycle.

    PubMed

    Faya, Paul; Seaman, John W; Stamey, James D

    2017-01-01

    Validation of pharmaceutical manufacturing processes is a regulatory requirement and plays a key role in the assurance of drug quality, safety, and efficacy. The FDA guidance on process validation recommends a life-cycle approach which involves process design, qualification, and verification. The European Medicines Agency makes similar recommendations. The main purpose of process validation is to establish scientific evidence that a process is capable of consistently delivering a quality product. A major challenge faced by manufacturers is the determination of the number of batches to be used for the qualification stage. In this article, we present a Bayesian assurance and sample size determination approach where prior process knowledge and data are used to determine the number of batches. An example is presented in which potency uniformity data is evaluated using a process capability metric. By using the posterior predictive distribution, we simulate qualification data and make a decision on the number of batches required for a desired level of assurance.
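
    The posterior-predictive idea can be sketched numerically: draw batch parameters from a posterior, simulate qualification batches, and see how assurance grows with the number of batches. The "posterior", specification limits, and capability rule below are all invented for illustration and are not the paper's model.

        # Assurance vs. number of qualification batches (toy sketch).
        import numpy as np

        rng = np.random.default_rng(7)
        lsl, usl = 90.0, 110.0                     # hypothetical spec limits
        mu = rng.normal(100.0, 0.8, 20_000)        # posterior draws: batch mean
        sd = np.abs(rng.normal(2.0, 0.3, 20_000))  # posterior draws: batch SD

        def assurance(n_batches, cpk_target=1.0):
            """P(all simulated batches meet the Cpk criterion)."""
            ok = np.ones(mu.size, dtype=bool)
            for _ in range(n_batches):
                m = rng.normal(mu, sd / np.sqrt(10))   # mean of 10 units
                cpk = np.minimum(usl - m, m - lsl) / (3 * sd)
                ok &= cpk >= cpk_target
            return ok.mean()

        for n in (3, 5, 10):
            print(n, round(assurance(n), 3))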

  3. An analysis of Apollo lunar soil samples 12070,889, 12030,187, and 12070,891: Basaltic diversity at the Apollo 12 landing site and implications for classification of small-sized lunar samples

    NASA Astrophysics Data System (ADS)

    Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.

    2016-09-01

    Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.
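
    The Mg# defined above is a direct computation from oxide analyses; a minimal helper (molar masses hard-coded, example values invented) might look like:

        # Atomic Mg# = 100 * Mg / (Mg + Fe) from MgO and FeO weight percents.
        MGO_MOLAR, FEO_MOLAR = 40.304, 71.844   # g/mol

        def mg_number(mgo_wt, feo_wt):
            mg = mgo_wt / MGO_MOLAR
            fe = feo_wt / FEO_MOLAR
            return 100 * mg / (mg + fe)

        print(round(mg_number(38.0, 20.0), 1))  # ~77.2 for this example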

  4. Methods for Determining Particle Size Distributions from Nuclear Detonations.

    DTIC Science & Technology

    1987-03-01

    [No abstract recoverable: the indexed excerpt consists of table-of-contents fragments referencing debris sample preparation, photon correlation spectroscopy (PCS) set parameters, analyses by vendors, Brookhaven analyses of samples R-3 and R-8 using the method of cumulants and the histogram method, and TEM particle data.]

  5. Sampling plantations to determine white-pine weevil injury

    Treesearch

    Robert L. Talerico; Robert W. Wilson, Jr.

    1973-01-01

    Use of 1/10-acre square plots to obtain estimates of the proportion of never-weeviled trees necessary for evaluating and scheduling white-pine weevil control is described. The optimum number of trees to observe per plot is estimated from data obtained from sample plantations in the Northeast, and a table is given of the sample size required to achieve a standard error of…

  6. Measuring solids concentration in stormwater runoff: comparison of analytical methods.

    PubMed

    Clark, Shirley E; Siu, Christina Y S

    2008-01-15

    Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.

  7. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features.

    PubMed

    McDonald, Linda S; Panozzo, Joseph F; Salisbury, Phillip A; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective.

  9. Sampling design and required sample size for evaluating contamination levels of 137Cs in Japanese fir needles in a mixed deciduous forest stand in Fukushima, Japan.

    PubMed

    Oba, Yurika; Yamada, Toshihiro

    2017-05-01

    We estimated the sample size (the number of samples) required to evaluate the concentration of radiocesium (137Cs) in Japanese fir (Abies firma Sieb. & Zucc.), 5 years after the outbreak of the Fukushima Daiichi Nuclear Power Plant accident. We investigated the spatial structure of the contamination levels in this species growing in a mixed deciduous broadleaf and evergreen coniferous forest stand. We sampled 40 saplings with a tree height of 150-250 cm in a Fukushima forest community. The results showed that: (1) there was no correlation between the 137Cs concentration in needles and soil, and (2) the difference in the spatial distribution pattern of 137Cs concentration between needles and soil suggests that the contribution of root uptake to 137Cs in new needles of this species may have been minor in the 5 years after the radionuclides were released into the atmosphere. The concentration of 137Cs in needles showed a strong positive spatial autocorrelation in the distance class from 0 to 2.5 m, suggesting that statistical analysis of the data should account for spatial autocorrelation when assessing the radioactive contamination of forest trees. According to our sample size analysis, a sample size of seven trees was required to determine the mean contamination level within an error in the means of no more than 10%. This required sample size may be feasible for most sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
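
    Setting aside the spatial autocorrelation the authors emphasize, the required sample size for a relative error E in the mean follows the familiar n = (z * CV / E)^2; an assumed coefficient of variation of about 13% reproduces a requirement of roughly seven trees for the 10% criterion.

        # Sample size for a target relative error of the mean (i.i.d. case).
        import math

        def n_for_relative_error(cv, rel_error, z=1.96):
            return math.ceil((z * cv / rel_error) ** 2)

        print(n_for_relative_error(0.13, 0.10))   # -> 7 with the assumed CV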

  10. 77 FR 26292 - Risk Evaluation and Mitigation Strategy Assessments: Social Science Methodologies to Assess Goals...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-03

    ... determine endpoints; questionnaire design and analyses; and presentation of survey results. To date, FDA has..., the workshop will invest considerable time in identifying best methodological practices for conducting... sample, sample size, question design, process, and endpoints. Panel 2 will focus on alternatives to...

  11. "Adultspan" Publication Patterns: Author and Article Characteristics from 1999 to 2009

    ERIC Educational Resources Information Center

    Erford, Bradley T.; Clark, Kelly H.; Erford, Breann M.

    2011-01-01

    Publication patterns of articles in "Adultspan" from 1999 to 2009 were reviewed. Author characteristics and article content were analyzed to determine trends over time. Research articles were analyzed specifically for type of research design, classification, sampling method, types of participants, sample size, types of statistics used, and…

  12. 40 CFR 53.40 - General provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 50 percent cutpoint of a test sampler shall be determined in a wind tunnel using 10 particle sizes and three wind speeds as specified in table D-2. A minimum of 3 replicate measurements of sampling... sampling effectiveness (percent) versus aerodynamic particle diameter (µm) for each of the three wind...

  13. 40 CFR 53.40 - General provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 50 percent cutpoint of a test sampler shall be determined in a wind tunnel using 10 particle sizes and three wind speeds as specified in table D-2. A minimum of 3 replicate measurements of sampling... sampling effectiveness (percent) versus aerodynamic particle diameter (µm) for each of the three wind...

  14. Comparative fiber evaluation of the mesdan aqualab microwave moisture measurement instrument

    USDA-ARS?s Scientific Manuscript database

    Moisture is a key cotton fiber parameter, as it can impact the fiber quality and the processing of cotton fiber. The Mesdan Aqualab is a microwave-based fiber moisture measurement instrument for samples with moderate sample size. A program was implemented to determine the capabilities of the Aqual...

  15. Investigation of element distributions in Luna-16 regolith

    NASA Astrophysics Data System (ADS)

    Kuznetsov, R. A.; Lure, B. G.; Minevich, V. Ia.; Stiuf, V. I.; Pankratov, V. B.

    1981-03-01

    The concentrations of 32 elements in fractions of different grain sizes in the samples of the lunar regolith brought back by Luna-16 are determined by means of neutron activation analysis. Four groups of elements are distinguished on the basis of the variations of their concentration with grain size, and concentration variations of the various elements with sample depth are also noted. Chemical leaching of the samples combined with neutron activation also reveals differences in element concentrations in the water soluble, metallic, sulphide, phosphate, rare mineral and rock phases of the samples. In particular, the rare earth elements are observed to be depleted in the regolith with respect to chondritic values, and to be concentrated in the phase extracted with 14 M HNO3.

  16. Simultaneous Determination of Size and Quantification of Gold Nanoparticles by Direct Coupling Thin Layer Chromatography with Catalyzed Luminol Chemiluminescence

    PubMed Central

    Yan, Neng; Zhu, Zhenli; He, Dong; Jin, Lanlan; Zheng, Hongtao; Hu, Shenghong

    2016-01-01

    The increasing use of metal-based nanoparticle products has raised concerns in particular for the aquatic environment and thus the quantification of such nanomaterials released from products should be determined to assess their environmental risks. In this study, a simple, rapid and sensitive method for the determination of size and mass concentration of gold nanoparticles (AuNPs) in aqueous suspension was established by direct coupling of thin layer chromatography (TLC) with catalyzed luminol-H2O2 chemiluminescence (CL) detection. For this purpose, a moving stage was constructed to scan the chemiluminescence signal from TLC separated AuNPs. The proposed TLC-CL method allows the quantification of differently sized AuNPs (13 nm, 41 nm and 100 nm) contained in a mixture. Various experimental parameters affecting the characterization of AuNPs, such as the concentration of H2O2, the concentration and pH of the luminol solution, and the size of the spectrometer aperture were investigated. Under optimal conditions, the detection limits for AuNP size fractions of 13 nm, 41 nm and 100 nm were 38.4 μg L−1, 35.9 μg L−1 and 39.6 μg L−1, with repeatabilities (RSD, n = 7) of 7.3%, 6.9% and 8.1% respectively for 10 mg L−1 samples. The proposed method was successfully applied to the characterization of AuNP size and concentration in aqueous test samples. PMID:27080702

  17. Influence of size-fractioning techniques on concentrations of selected trace metals in bottom materials from two streams in northeastern Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Helsel, Dennis R.

    1986-01-01

    Identical stream-bottom material samples, when fractioned to the same size by different techniques, may contain significantly different trace-metal concentrations. Precision of techniques also may differ, which could affect the ability to discriminate between size-fractioned bottom-material samples having different metal concentrations. Bottom-material samples fractioned to less than 0.020 millimeters by means of three common techniques (air elutriation, sieving, and settling) were analyzed for six trace metals to determine whether the technique used to obtain the desired particle-size fraction affects the ability to discriminate between bottom materials having different trace-metal concentrations. In addition, this study attempts to assess whether median trace-metal concentrations in size-fractioned bottom materials of identical origin differ depending on the size-fractioning technique used. Finally, this study evaluates the efficiency of the three size-fractioning techniques in terms of time, expense, and effort involved. Bottom-material samples were collected at two sites in northeastern Ohio: One is located in an undeveloped forested basin, and the other is located in a basin having a mixture of industrial and surface-mining land uses. The sites were selected for their close physical proximity, similar contributing drainage areas, and the likelihood that trace-metal concentrations in the bottom materials would be significantly different. Statistically significant differences in the concentrations of trace metals were detected between bottom-material samples collected at the two sites when the samples had been size-fractioned by means of air elutriation or sieving. Samples that had been size-fractioned by settling in native water did not differ measurably in any of the six trace metals analyzed. Results of multiple comparison tests suggest that differences related to size-fractioning technique were evident in median copper, lead, and iron concentrations. Technique-related differences in copper concentrations most likely resulted from contamination of air-elutriated samples by a feed tip on the elutriator apparatus. No technique-related differences were observed in chromium, manganese, or zinc concentrations. Although air elutriation was the most expensive size-fractioning technique investigated, samples fractioned by this technique appeared to provide a superior level of discrimination between metal concentrations present in the bottom materials of the two sites. Sieving was an adequate lower-cost but more labor-intensive alternative.

  18. Impact of crystalline defects and size on X-ray line broadening: A phenomenological approach for tetragonal SnO2 nanocrystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muhammed Shafi, P.; Chandra Bose, A., E-mail: acbose@nitt.edu

    2015-05-15

    Nanocrystalline tin oxide (SnO2) powders with different grain sizes were prepared by chemical precipitation. The reaction was carried out by varying the period of hydrolysis, and the as-prepared samples were annealed at different temperatures. The samples were characterized using an X-ray powder diffractometer and transmission electron microscopy. The microstrain and crystallite size were calculated for all the samples by using the Williamson-Hall (W-H) models, namely the isotropic strain model (ISM), the anisotropic strain model (ASM) and the uniform deformation energy density model (UDEDM). The morphology and particle size were determined using TEM micrographs. The direction-dependent Young's modulus was expressed as an equation relating the elastic compliances (s_ij) and the Miller indices of the lattice plane (hkl) for the tetragonal crystal system, and the equation for the elastic compliance in terms of the stiffness constants was also derived. Changes in crystallite size and microstrain due to lattice defects were observed as the hydrolysis time and the annealing temperature were varied. The dependence of crystallite size on lattice strain was studied. The results were correlated with the available studies on electrical properties using impedance spectroscopy.
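
    The isotropic strain model named above is a straight-line fit of beta*cos(theta) against 4*sin(theta), whose intercept gives the size term and whose slope gives the microstrain; the sketch below uses invented peak positions and widths to show the mechanics.

        # Isotropic Williamson-Hall fit:
        # beta*cos(theta) = K*lambda/D + 4*eps*sin(theta)
        import numpy as np

        wavelength = 0.15406   # Cu K-alpha, nm (assumed)
        K = 0.9
        two_theta = np.array([26.6, 33.9, 38.0, 51.8, 54.8])  # deg, invented
        fwhm_deg = np.array([0.42, 0.45, 0.47, 0.55, 0.58])   # deg, invented

        theta = np.radians(two_theta) / 2
        beta = np.radians(fwhm_deg)
        slope, intercept = np.polyfit(4 * np.sin(theta), beta * np.cos(theta), 1)
        print("microstrain ~", slope)
        print("crystallite size ~", K * wavelength / intercept, "nm")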

  19. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.

  20. Comparison of photon correlation spectroscopy with photosedimentation analysis for the determination of aqueous colloid size distributions

    USGS Publications Warehouse

    Rees, Terry F.

    1990-01-01

    Colloidal materials, dispersed phases with dimensions between 0.001 and 1 μm, are potential transport media for a variety of contaminants in surface and ground water. Characterization of these colloids, and identification of the parameters that control their movement, are necessary before transport simulations can be attempted. Two techniques that can be used to determine the particle-size distribution of colloidal materials suspended in natural waters are compared. Photon correlation spectroscopy (PCS) utilizes the Doppler frequency shift of photons scattered off particles undergoing Brownian motion to determine the size of colloids suspended in water. Photosedimentation analysis (PSA) measures the time-dependent change in optical density of a suspension of colloidal particles undergoing centrifugation. A description of both techniques, important underlying assumptions, and limitations are given. Results for a series of river water samples show that the colloid-size distribution means are statistically identical as determined by both techniques. This also is true of the mass median diameter (MMD), even though MMD values determined by PSA are consistently smaller than those determined by PCS. Because of this small negative bias, the skew parameters for the distributions are generally smaller for the PCS-determined distributions than for the PSA-determined distributions. Smaller polydispersity indices for the distributions are also determined by PCS.
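
    For context, PCS converts the diffusion coefficient D extracted from the scattered-light fluctuations into a hydrodynamic diameter through the Stokes-Einstein relation (standard form, given here in LaTeX; eta is the solvent viscosity):

        % Stokes-Einstein relation used in photon correlation spectroscopy.
        d_H = \frac{k_B T}{3 \pi \eta D}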

  1. Environmental DNA particle size distribution from Brook Trout (Salvelinus fontinalis)

    Treesearch

    Taylor M. Wilcox; Kevin S. McKelvey; Michael K. Young; Winsor H. Lowe; Michael K. Schwartz

    2015-01-01

    Environmental DNA (eDNA) sampling has become a widespread approach for detecting aquatic animals with high potential for improving conservation biology. However, little research has been done to determine the size of particles targeted by eDNA surveys. In this study, we conduct particle distribution analysis of eDNA from a captive Brook Trout (Salvelinus fontinalis) in...

  2. Relationship among School Size, School Culture and Students' Achievement at Secondary Level in Pakistan

    ERIC Educational Resources Information Center

    Ahmad Salfi, Naseer; Saeed, Muhammad

    2007-01-01

    Purpose: This paper seeks to determine the relationship among school size, school culture and students' achievement at secondary level in Pakistan. Design/methodology/approach: The study was descriptive (survey type). It was conducted on a sample of 90 secondary school head teachers and 540 primary, elementary and high school teachers working in…

  3. Cratering in glasses impacted by debris or micrometeorites

    NASA Technical Reports Server (NTRS)

    Wiedlocher, David E.; Kinser, Donald L.

    1993-01-01

    Mechanical strength measurements on five glasses and one glass-ceramic exposed on LDEF revealed no damage exceeding experimental limits of error. The measurement technique subjected less than 5 percent of the sample surface area to stresses above 90 percent of the failure strength. Seven micrometeorite or space debris impacts occurred at locations which were not in that portion of the sample subjected to greater than 90 percent of the applied stress. As a result of this, the impact events on the sample were not detected in the mechanical strength measurements. The physical form and structure of the impact sites were carefully examined to determine the influence of those events upon stress concentration associated with the impact and the resulting mechanical strength. The size of the impact site, insofar as it determines flaw size for fracture purposes, was examined. Surface topography of the impacts reveals that six of the seven sites display impact melting. The classical melt crater structure is surrounded by a zone of fractured glass. Residual stresses arising from shock compression and from cooling of the fused zone cannot be included in fracture mechanics analyses based on simple flaw size measurements. Strategies for refining estimates of mechanical strength degradation by impact events are presented.

  4. How Students Cope with a Procedureless Lab Exercise.

    ERIC Educational Resources Information Center

    Pickering, Miles; Crabtree, Robert H.

    1979-01-01

    Reports a study conducted to determine how students cope with a procedureless laboratory situation in physical chemistry. Students are expected to use ingenuity, determine choice of sample size, conditions, and temperature extrapolation in an experiment on measuring heat of solution of an unknown salt. (Author/SA)

  5. Determination of hydrogen abundance in selected lunar soils

    NASA Technical Reports Server (NTRS)

    Bustin, Roberta

    1987-01-01

    Hydrogen was implanted in lunar soil through solar wind activity. In order to determine the feasibility of utilizing this solar wind hydrogen, it is necessary to know not only hydrogen abundances in bulk soils from a variety of locations but also the distribution of hydrogen within a given soil. Hydrogen distribution in bulk soils, grain size separates, mineral types, and core samples was investigated. Hydrogen was found in all samples studied. The amount varied considerably, depending on soil maturity, mineral types present, grain size distribution, and depth. Hydrogen implantation is definitely a surface phenomenon. However, as constructional particles are formed, previously exposed surfaces become embedded within particles, causing an enrichment of hydrogen in these species. In view of possibly extracting the hydrogen for use on the lunar surface, it is encouraging to know that hydrogen is present to a considerable depth and not only in the upper few millimeters. Based on these preliminary studies, extraction of solar wind hydrogen from lunar soil appears feasible, particularly if some kind of grain size separation is possible.

  6. Bed-material characteristics of the Sacramento–San Joaquin Delta, California, 2010–13

    USGS Publications Warehouse

    Marineau, Mathieu D.; Wright, Scott A.

    2017-02-10

    The characteristics of bed material at selected sites within the Sacramento–San Joaquin Delta, California, during 2010–13 are described in a study conducted by the U.S. Geological Survey in cooperation with the Bureau of Reclamation. During 2010‒13, six complete sets of samples were collected. Samples were initially collected at 30 sites; however, starting in 2012, samples were collected at 7 additional sites. These sites are generally collocated with an active streamgage. At all but one site, a separate bed-material sample was collected at three locations within the channel (left, right, and center). Bed-material samples were collected using either a US BMH–60 or a US BM–54 (for sites with higher stream velocity) cable-suspended, scoop sampler. Samples from each location were oven-dried and sieved. Bed material finer than 2 millimeters was subsampled using a sieving riffler and processed using a Beckman Coulter LS 13–320 laser diffraction particle-size analyzer. To determine the organic content of the bed material, the loss on ignition method was used for one subsample from each location. Particle-size distributions are presented as cumulative percent finer than a given size. Median and 90th-percentile particle size, and the percentage of subsample mass lost using the loss on ignition method for each sample are also presented in this report.

  7. Crystal Face Distributions and Surface Site Densities of Two Synthetic Goethites: Implications for Adsorption Capacities as a Function of Particle Size.

    PubMed

    Livi, Kenneth J T; Villalobos, Mario; Leary, Rowan; Varela, Maria; Barnard, Jon; Villacís-García, Milton; Zanella, Rodolfo; Goodridge, Anna; Midgley, Paul

    2017-09-12

    Two synthetic goethites of varying crystal size distributions were analyzed by BET, conventional TEM, cryo-TEM, atomic resolution STEM and HRTEM, and electron tomography in order to determine the effects of crystal size, shape, and atomic-scale surface roughness on their adsorption capacities. The two samples were determined by BET to have very different site densities based on Cr(VI) adsorption experiments. Model specific surface areas generated from TEM observations showed that, based on size and shape, there should be little difference in their adsorption capacities. Electron tomography revealed that both samples crystallized with an asymmetric {101} tablet habit. STEM and HRTEM images showed a significant increase in atomic-scale surface roughness of the larger goethite. This difference in roughness was quantified based on measurements of the relative abundances of crystal faces {101} and {210} for the two goethites, and a reactive surface site density was calculated for each goethite. Singly coordinated sites on face {210} are 2.5 times more dense than on face {101}, and the larger goethite showed an average total of 36% {210} as compared to 14% for the smaller goethite. This difference explains the considerably larger adsorption capacity of the larger goethite vs the smaller sample and points toward the necessity of knowing the atomic-scale surface structure in predicting mineral adsorption processes.

  8. QA/QC requirements for physical properties sampling and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Innis, B.E.

    1993-07-21

    This report presents results of an assessment of the available information concerning US Environmental Protection Agency (EPA) quality assurance/quality control (QA/QC) requirements and guidance applicable to sampling, handling, and analyzing physical parameter samples at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) investigation sites. Geotechnical testing laboratories measure the following physical properties of soil and sediment samples collected during CERCLA remedial investigations (RI) at the Hanford Site: moisture content, grain size by sieve, grain size by hydrometer, specific gravity, bulk density/porosity, saturated hydraulic conductivity, moisture retention, unsaturated hydraulic conductivity, and permeability of rocks by flowing air. Geotechnical testing laboratories also measure the following chemical parameters of soil and sediment samples collected during Hanford Site CERCLA RI: calcium carbonate and saturated column leach testing. Physical parameter data are used for (1) characterization of vadose and saturated zone geology and hydrogeology, (2) selection of monitoring well screen sizes, (3) support of modeling and analysis of the vadose and saturated zones, and (4) engineering design. The objectives of this report are to determine the QA/QC levels accepted in the EPA Region 10 for the sampling, handling, and analysis of soil samples for physical parameters during CERCLA RI.

  9. Determining shapes and dimensions of dental arches for the use of straight-wire arches in lingual technique.

    PubMed

    Kairalla, Silvana Allegrini; Scuzzo, Giuseppe; Triviño, Tarcila; Velasco, Leandro; Lombardo, Luca; Paranhos, Luiz Renato

    2014-01-01

    This study aims to determine the shape and dimensions of dental arches from a lingual perspective, and to determine the shape and size of a straight archwire used for lingual orthodontics. The study sample comprised 70 Caucasian Brazilian individuals with normal occlusion and at least four of Andrews' six keys. Maxillary and mandibular dental casts were digitized (3D) and the images were analyzed with Delcam PowerSHAPE 2010 software. Landmarks on the lingual surface of the teeth were selected and 14 measurements were calculated to determine the shape and size of the dental arches. After a Shapiro-Wilk test, small arch shapes were characterized by the 25th percentile (P25), medium arches by the average, and large arches by the 75th percentile (P75). A t-test revealed differences between males and females in 12 of the dental arch size measurements. The straight-wire arch shape used in the lingual straight-wire technique is a parabolic arch, slightly flattened on its anterior portion. Owing to the similarity of dental arch sizes between males and females, a more simplified diagram chart was designed.

  10. Standard-less analysis of Zircaloy clad samples by an instrumental neutron activation method

    NASA Astrophysics Data System (ADS)

    Acharya, R.; Nair, A. G. C.; Reddy, A. V. R.; Goswami, A.

    2004-03-01

    A non-destructive method for the analysis of Zircaloy samples of irregular shape and size has been developed using the recently standardized k0-based internal mono-standard instrumental neutron activation analysis (INAA). Samples of Zircaloy-2 and -4 tubes, used as fuel cladding in Indian boiling water reactors (BWR) and pressurized heavy water reactors (PHWR), respectively, have been analyzed. Samples weighing in the range of a few tens of grams were irradiated in the thermal column of the Apsara reactor to minimize neutron flux perturbations and high radiation dose. The method utilizes the in situ relative detection efficiency obtained from the γ-rays of selected activation products in the sample to overcome γ-ray self-attenuation. Since the major and minor constituents (Zr, Sn, Fe, Cr and/or Ni) in these samples were amenable to NAA, the absolute concentrations of all the elements were determined using mass balance instead of the concentration of the internal mono-standard. Concentrations were also determined in a smaller Zircaloy-4 sample, irradiated in the core position of the reactor, to validate the present methodology. The results were compared with literature specifications and found to be satisfactory. Values of sensitivities and detection limits have been evaluated for the elements analyzed.

  11. Summary of sediment data from the Yampa river and upper Green river basins, Colorado and Utah, 1993-2002

    USGS Publications Warehouse

    Elliott, John G.; Anders, Steven P.

    2004-01-01

    The water resources of the Upper Colorado River Basin have been extensively developed for water supply, irrigation, and power generation through water storage in upstream reservoirs during spring runoff and subsequent releases during the remainder of the year. The net effect of water-resource development has been to substantially modify the predevelopment annual hydrograph as well as the timing and amount of sediment delivery from tributaries of the upper Green River and Yampa River Basins to the main-stem reaches where endangered native fish populations have been observed. The U.S. Geological Survey, in cooperation with the Colorado Division of Wildlife and the U.S. Fish and Wildlife Service, began a study to identify sediment source reaches in the Green River main stem and the lower Yampa and Little Snake Rivers and to identify sediment-transport relations that would be useful in assessing the potential effects of hydrograph modification by reservoir operation on sedimentation at identified razorback spawning bars in the Green River. The need for additional data collection is evaluated at each sampling site. Sediment loads were calculated at five key areas within the watershed by using instantaneous measurements of streamflow, suspended-sediment concentration, and bedload. Sediment loads were computed at each site for two modes of transport (suspended load and bedload), as well as for the total-sediment load (suspended load plus bedload) where both modes were sampled. Sediment loads also were calculated by sediment particle-size range (silt-and-clay, and sand-and-gravel sizes) if laboratory size analysis had been performed on the sample, and by hydrograph season. Sediment-transport curves were developed for each type of sediment load by a least-squares regression of logarithmic-transformed data. Transport equations for suspended load and total load had coefficients of determination of at least 0.72 at all of the sampling sites except Little Snake River near Lily, Colorado. Bedload transport equations at the five sites had coefficients of determination that ranged from 0.40 (Yampa River at Deerlodge Park, Colorado) to 0.80 (Yampa River above Little Snake River near Maybell, Colorado). Transport equations for silt- and clay-size material had coefficients of determination that ranged from 0.46 to 0.82. Where particle-size data were available (Yampa River at Deerlodge Park, Colorado, and Green River near Jensen, Utah), transport equations for the smaller particle sizes (fine sand) tended to have higher coefficients of determination than the equations for coarser sizes (medium and coarse sand, and very coarse sand and gravel). Because the data had to be subdivided into at least two subsets (rising-limb, falling-limb and, occasionally, base-flow periods), the seasonal transport equations generally were based on relatively few samples. All transport equations probably could be improved by additional data collected at strategically timed periods.
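
    The transport-curve construction described above is ordinary least squares on log-transformed data, giving a rating curve of the form Qs = a*Q^b. A hedged sketch with hypothetical streamflow and load values (not the study data):

        # Hedged sketch of a sediment-transport curve fitted by least-squares
        # regression of log-transformed data: log10(Qs) = log10(a) + b*log10(Q).
        # Streamflow and load values are hypothetical placeholders.
        import numpy as np

        streamflow = np.array([10.0, 25.0, 60.0, 140.0, 300.0])      # m^3/s
        sediment_load = np.array([5.0, 30.0, 150.0, 700.0, 2500.0])  # tons/day

        b, log_a = np.polyfit(np.log10(streamflow), np.log10(sediment_load), 1)
        pred = log_a + b * np.log10(streamflow)
        resid = np.log10(sediment_load) - pred
        r2 = 1.0 - resid.var() / np.log10(sediment_load).var()  # coefficient of determination
        print(f"Qs = {10**log_a:.3g} * Q^{b:.2f}, R^2 = {r2:.2f}")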

  12. Number of pins in two-stage stratified sampling for estimating herbage yield

    Treesearch

    William G. O'Regan; C. Eugene Conrad

    1975-01-01

    In a two-stage stratified procedure for sampling herbage yield, plots are stratified by a pin frame in stage one, and clipped. In stage two, clippings from selected plots are sorted, dried, and weighed. Sample size and distribution of plots between the two stages are determined by equations. A way to compute the effect of number of pins on the variance of estimated...
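
    The abstract's own equations are not reproduced above, but the classical optimum allocation for double (two-phase) sampling with a regression estimator conveys the idea: the fraction of pin-framed plots worth clipping and weighing depends on the stage cost ratio and on how well pin counts predict clipped weight. A hedged, generic sketch, with hypothetical inputs and not the authors' equations:

        # Classical optimum allocation for double (two-phase) sampling with a
        # regression estimator; a standard stand-in, not the paper's equations.
        import math

        c_phase1 = 1.0   # relative cost of a pin-frame reading on one plot
        c_phase2 = 8.0   # relative cost of sorting/drying/weighing one plot's clippings
        rho = 0.85       # assumed correlation between pin counts and clipped weight

        # Optimum ratio of phase-two (lab) plots to phase-one (pin-frame) plots:
        ratio = math.sqrt((c_phase1 * (1.0 - rho**2)) / (c_phase2 * rho**2))
        print(f"clip and weigh about {ratio:.2f} of the pin-framed plots")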

  13. Standard test method for grindability of coal by the hardgrove-machine method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1975-01-01

    This method is used to determine the relative grindability or ease of pulverization of coals in comparison with coals chosen as standards. A prepared sample receives a definite amount of grinding energy in a miniature pulverizer, and the change in size consist is determined by sieving.

  14. Evaluation of sampling plans to detect Cry9C protein in corn flour and meal.

    PubMed

    Whitaker, Thomas B; Trucksess, Mary W; Giesbrecht, Francis G; Slate, Andrew B; Thomas, Francis S

    2004-01-01

    StarLink is a genetically modified corn that produces an insecticidal protein, Cry9C. Studies were conducted to determine the variability and Cry9C distribution among sample test results when Cry9C protein was estimated in a bulk lot of corn flour and meal. Emphasis was placed on measuring sampling and analytical variances associated with each step of the test procedure used to measure Cry9C in corn flour and meal. Two commercially available enzyme-linked immunosorbent assay kits were used: one for the determination of Cry9C protein concentration and the other for % StarLink seed. The sampling and analytical variances associated with each step of the Cry9C test procedures were determined for flour and meal. Variances were found to be functions of Cry9C concentration, and regression equations were developed to describe the relationships. Because of the larger particle size, sampling variability associated with cornmeal was about double that for corn flour. For cornmeal, the sampling variance accounted for 92.6% of the total testing variability. The observed sampling and analytical distributions were compared with the Normal distribution. In almost all comparisons, the null hypothesis that the Cry9C protein values were sampled from a Normal distribution could not be rejected at 95% confidence limits. The Normal distribution and the variance estimates were used to evaluate the performance of several Cry9C protein sampling plans for corn flour and meal. Operating characteristic curves were developed and used to demonstrate the effect of increasing sample size on reducing false positives (seller's risk) and false negatives (buyer's risk).
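
    The closing step described above, building operating characteristic curves from the Normal distribution and the fitted variance functions, can be sketched in a few lines. The variance-concentration relation, accept limit, and concentration below are hypothetical stand-ins for the regression equations developed in the study:

        # Hedged sketch of an operating characteristic (OC) calculation: the
        # probability that a lot at true concentration c passes a fixed accept
        # limit, with testing variance a function of concentration.
        from statistics import NormalDist

        def p_accept(true_conc, accept_limit, var_fn, n_samples):
            # averaging n independent test results shrinks the variance by n
            sd = (var_fn(true_conc) / n_samples) ** 0.5
            return NormalDist(true_conc, sd).cdf(accept_limit)

        var_fn = lambda c: 0.05 * c**1.5  # hypothetical variance-concentration relation
        for n in (1, 2, 4):
            p = p_accept(true_conc=1.2, accept_limit=1.0, var_fn=var_fn, n_samples=n)
            print(f"n = {n}: buyer's risk (false negative) at 1.2 units = {p:.2f}")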

  15. Measurement of particulates

    NASA Technical Reports Server (NTRS)

    Woods, D.

    1980-01-01

    The size distributions of particles in the exhaust plumes from the Titan rockets launched in August and September 1977 were determined from in situ measurements made from a small sampling aircraft that flew through the plumes. Two different sampling instruments were employed, a quartz crystal microbalance (QCM) cascade impactor and a forward scattering spectrometer probe (FSSP). The QCM measured the nonvolatile component of the aerosols in the plume, covering an aerodynamic size range from 0.05 to 25 micrometers in diameter. The FSSP, flown outside the aircraft under the nose section, measured both the liquid droplets and the solid particles over a size range from 0.5 to 7.5 micrometers in diameter. The particles were counted and classified into 15 size intervals. The presence of a large number of liquid droplets in the exhaust clouds is discussed and data are plotted for each launch and compared.

  16. Device for high spatial resolution chemical analysis of a sample and method of high spatial resolution chemical analysis

    DOEpatents

    Van Berkel, Gary J.

    2015-10-06

    A system and method for analyzing a chemical composition of a specimen are described. The system can include at least one pin; a sampling device configured to contact a liquid with a specimen on the at least one pin to form a testing solution; and a stepper mechanism configured to move the at least one pin and the sampling device relative to one another. The system can also include an analytical instrument for determining a chemical composition of the specimen from the testing solution. In particular, the systems and methods described herein enable chemical analysis of specimens, such as tissue, to be evaluated in a manner in which the spatial resolution is limited by the size of the pins used to obtain tissue samples, not by the size of the sampling device used to solubilize the samples coupled to the pins.

  17. Laboratory measurements of electric properties of composite mine dump samples from Colorado and New Mexico

    USGS Publications Warehouse

    Anderson, Anita L.; Campbell, David L.; Beanland, Shay

    2001-01-01

    Individual mine waste samples were collected and combined to form one composite sample at each of eight mine dump sites in Colorado and New Mexico. The samples were air-dried and sieved to determine the geochemical composition of their <2 mm size fraction. Splits of the samples were then rehydrated and their electrical properties were measured in the US Geological Survey Petrophysical Laboratory, Denver, Colorado (PetLab). The PetLab measurements were done twice: in 1999, using convenient amounts of rehydration water ranging from 5% to 8%; and in 2000, using carefully controlled rehydrations to 5% and 10% water. This report gives geochemical analyses of the <2 mm size fraction of the composite samples (Appendix A), PetLab graphs of the 1999 measurements (Appendix B), PetLab graphs of the 2000 measurements (Appendix C), and Cole-Cole models of the PetLab data from the 2000 measurements (Appendix D).

  18. In Situ Balloon-Borne Ice Particle Imaging in High-Latitude Cirrus

    NASA Astrophysics Data System (ADS)

    Kuhn, Thomas; Heymsfield, Andrew J.

    2016-09-01

    Cirrus clouds reflect incoming solar radiation, creating a cooling effect. At the same time, these clouds absorb the infrared radiation from the Earth, creating a greenhouse effect. The net effect, crucial for radiative transfer, depends on the cirrus microphysical properties, such as particle size distributions and particle shapes. Knowledge of these cloud properties is also needed for calibrating and validating passive and active remote sensors. Ice particles of sizes below 100 µm are inherently difficult to measure with aircraft-mounted probes due to issues with resolution, sizing, and size-dependent sampling volume. Furthermore, artefacts are produced by shattering of particles on the leading surfaces of the aircraft probes when particles several hundred microns or larger are present. Here, we report on a series of balloon-borne in situ measurements that were carried out at a high-latitude location, Kiruna in northern Sweden (68°N, 21°E). The method used here avoids these issues experienced with the aircraft probes. Furthermore, with a balloon-borne instrument, data are collected as vertical profiles, more useful for calibrating or evaluating remote sensing measurements than data collected along horizontal traverses. Particles are collected on an oil-coated film at a sampling speed given directly by the ascent rate of the balloon, 4 m s-1. The collecting film is advanced uniformly inside the instrument so that an always unused section of the film is exposed to ice particles, which are measured by imaging shortly after sampling. The high optical resolution of about 4 µm together with a pixel resolution of 1.65 µm allows particle detection at sizes of 10 µm and larger. For particles that are 20 µm (12 pixels) in size or larger, the shape can be recognized. The sampling volume, 130 cm3 s-1, is well defined and independent of particle size. With the encountered number concentrations between 4 and 400 L-1, this required about 90- to 4-s sampling times to determine particle size distributions of cloud layers. Depending on how ice particles vary through the cloud, several layers per cloud with relatively uniform properties have been analysed. Preliminary results of the balloon campaign, targeting upper tropospheric, cold cirrus clouds, are presented here. Ice particles in these clouds were predominantly very small, with a median size of measured particles of around 50 µm and about 80% of all particles below 100 µm in size. The properties of the particle size distributions at temperatures between -36 and -67 °C have been studied, as well as particle areas, extinction coefficients, and their shapes (area ratios). Gamma and log-normal distribution functions could be fitted to all measured particle size distributions, achieving very good correlation, with coefficients R of up to 0.95. Each distribution features one distinct mode. With decreasing temperature, the mode diameter decreases exponentially, whereas the total number concentration increases by two orders of magnitude with decreasing temperature in the same range. The high concentrations at cold temperatures also caused larger extinction coefficients, directly determined from cross-sectional areas of single ice particles, than at warmer temperatures. The mass of particles has been estimated from area and size. Ice water content (IWC) and effective diameters are then determined from the data. IWC varied only between 1 × 10-3 and 5 × 10-3 g m-3 at temperatures below -40 °C and did not show a clear temperature trend. These measurements are part of an ongoing study.
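
    The quoted sampling times follow from simple arithmetic on the fixed sampling volume flow: the time to image N particles at number concentration C is t = N / (C * flow). Assuming a target on the order of 50 particles per distribution (an assumption made here for illustration):

        # Worked check of the sampling-time range quoted above, at the stated
        # sampling volume flow of 130 cm^3/s; the ~50-particle target is an
        # assumption for illustration.
        flow_L_per_s = 130.0 / 1000.0        # 130 cm^3/s expressed in L/s

        for conc_per_L in (4.0, 400.0):      # encountered concentrations, L^-1
            count_rate = conc_per_L * flow_L_per_s       # particles per second
            t_needed = 50.0 / count_rate                 # seconds for ~50 particles
            print(f"{conc_per_L:5.0f} L^-1 -> {t_needed:5.1f} s")
        # Roughly 96 s and 1 s, the same order as the 90- to 4-s times above.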

  19. Chemical Characterization of an Envelope A Sample from Hanford Tank 241-AN-103

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hay, M.S.

    2000-08-23

    A whole tank composite sample from Hanford waste tank 241-AN-103 was received at the Savannah River Technology Center (SRTC) and chemically characterized. Prior to characterization the sample was diluted to {approximately}5 M sodium concentration. The filtered supernatant liquid, the total dried solids of the diluted sample, and the washed insoluble solids obtained from filtration of the diluted sample were analyzed. A mass balance calculation of the three fractions of the sample analyzed indicates the analytical results are relatively self-consistent for major components of the sample. However, some inconsistency was observed between results where more than one method of determination was employed and for species present in low concentrations. A direct comparison to previous analyses of material from tank 241-AN-103 was not possible due to unavailability of data for diluted samples of tank 241-AN-103 whole tank composites. However, the analytical data for other types of samples from 241-AN-103 were mathematically diluted and compare reasonably well with the current results. Although the segments of the core samples used to prepare the sample received at SRTC were combined in an attempt to produce a whole tank composite, determination of how well the results of the current analysis represent the actual composition of Hanford waste tank 241-AN-103 remains problematic due to the small sample size and the large size of the non-homogenized waste tank.

  20. Sediment loads and transport at constructed chutes along the Missouri River - Upper Hamburg Chute near Nebraska City, Nebraska, and Kansas Chute near Peru, Nebraska

    USGS Publications Warehouse

    Densmore, Brenda K.; Rus, David L.; Moser, Matthew T.; Hall, Brent M.; Andersen, Michael J.

    2016-02-04

    Comparisons of concentrations and loads from EWI samples collected from different transects within a study site resulted in few significant differences, but comparisons are limited by small sample sizes and large within-transect variability. When comparing the Missouri River upstream transect to the chute inlet transect, similar results were determined in 2012 as were determined in 2008—the chute inlet affected the amount of sediment entering the chute from the main channel. In addition, the Kansas chute is potentially affecting the sediment concentration within the Missouri River main channel, but small sample size and construction activities within the chute limit the ability to fully understand either the effect of the chute in 2012 or the effect of the chute on the main channel during a year without construction. Finally, some differences in SSC were detected between the Missouri River upstream transects and the chute downstream transects; however, the effect of the chutes on the Missouri River main-channel sediment transport was difficult to isolate because of construction activities and sampling variability.

  1. Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough?

    PubMed

    Hennink, Monique M; Kaiser, Bonnie N; Marconi, Vincent C

    2017-03-01

    Saturation is a core guiding principle to determine sample sizes in qualitative research, yet little methodological research exists on parameters that influence saturation. Our study compared two approaches to assessing saturation: code saturation and meaning saturation. We examined sample sizes needed to reach saturation in each approach, what saturation meant, and how to assess saturation. Examining 25 in-depth interviews, we found that code saturation was reached at nine interviews, whereby the range of thematic issues was identified. However, 16 to 24 interviews were needed to reach meaning saturation where we developed a richly textured understanding of issues. Thus, code saturation may indicate when researchers have "heard it all," but meaning saturation is needed to "understand it all." We used our results to develop parameters that influence saturation, which may be used to estimate sample sizes for qualitative research proposals or to document in publications the grounds on which saturation was achieved.

  2. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by application of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit-miss and signal-amplitude testing, where signal amplitudes are reduced to hit-miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
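
    The zero-miss case of this binomial demonstration is easy to verify: with x hits in n trials, the 95% lower confidence bound on POD reaches 0.90 when the chance of observing that many hits or more under POD = 0.90 drops to 5%, which for an all-hit sample reduces to 0.9^n <= 0.05. A generic sketch of the binomial calculation, not the DOEPOD procedure itself:

        # Hedged sketch of the binomial demonstration behind "90/95 POD".
        import math

        def demonstrates_90_95(hits, n, pod=0.90, conf=0.95):
            # P(X >= hits | POD = pod) <= 1 - conf rejects POD < pod
            tail = sum(math.comb(n, k) * pod**k * (1.0 - pod)**(n - k)
                       for k in range(hits, n + 1))
            return tail <= 1.0 - conf

        n = 1
        while not demonstrates_90_95(n, n):  # all-hit case
            n += 1
        print(f"{n} hits out of {n} demonstrate 90/95 POD")  # prints 29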

  3. Mass size distribution of particle-bound water

    NASA Astrophysics Data System (ADS)

    Canepari, S.; Simonetti, G.; Perrino, C.

    2017-09-01

    The thermal-ramp Karl Fischer method (tr-KF) for the determination of PM-bound water has been applied to size-segregated PM samples collected in areas subjected to different environmental conditions (protracted atmospheric stability, desert dust intrusion, urban atmosphere). This method, based on the use of a thermal ramp for the desorption of water from PM samples and the subsequent analysis by the coulometric KF technique, had previously been shown to differentiate water contributions retained with different strengths and associated with different chemical components in the atmospheric aerosol. The application of the method to size-segregated samples has revealed that water showed a typical mass size distribution in each of the three environmental situations that were taken into consideration. A very similar size distribution was shown by the chemical PM components that prevailed during each event: ammonium nitrate in the case of atmospheric stability, crustal species in the case of desert dust, road-dust components in the case of urban sites. The shape of the tr-KF curve varied according to the size of the collected particles. Considering the size ranges that better characterize each event (fine fraction for atmospheric stability, coarse fraction for dust intrusion, bi-modal distribution for urban dust), this shape is coherent with the typical tr-KF shape shown by water bound to the chemical species that predominate in the same PM size range (ammonium nitrate, crustal species, secondary/combustion species - road dust components).

  4. Particle size distribution of distillers dried grains with solubles (DDGS) and relationships to compositional and color properties.

    PubMed

    Liu, Keshun

    2008-11-01

    Eleven distillers dried grains with solubles (DDGS), processed from yellow corn, were collected from different ethanol processing plants in the US Midwest area. Particle size distribution (PSD) by mass of each sample was determined using a series of six selected US standard sieves: Nos. 8, 12, 18, 35, 60, and 100, and a pan. The original sample and sieve-sized fractions were measured for surface color and contents of moisture, protein, oil, ash, and starch. Total carbohydrate (CHO) and total non-starch CHO were also calculated. Results show that there was a great variation in composition and color among DDGS from different plants. Surprisingly, a few DDGS samples contained unusually high amounts of residual starch (11.1-17.6%, dry matter basis, vs. about 5% for the rest), presumably resulting from modified processing methods. Particle size of DDGS varied greatly within a sample and PSD varied greatly among samples. The 11 samples had a mean value of 0.660 mm for the geometric mean diameter (dgw) of particles and a mean value of 0.440 mm for the geometric standard deviation (Sgw) of particle diameters by mass. The majority had a unimodal PSD, with a mode in the size class between 0.5 and 1.0 mm. Although PSD and color parameters had little correlation with composition of whole DDGS samples, distribution of nutrients as well as color attributes correlated well with PSD. In sieved fractions, protein content and L and a color values correlated negatively with particle size, while contents of oil and total CHO correlated positively with particle size. It is highly feasible to fractionate DDGS for compositional enrichment based on particle size, while the extent of PSD can serve as an index of the potential for DDGS fractionation. The above information should be a vital addition to quality and baseline data for DDGS.
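
    The two mass-based statistics quoted above, dgw and Sgw, follow from standard sieve-analysis formulas (ASAE S319 style), computed on the logarithms of the sieve sizes and converted back to millimetres. A hedged sketch with hypothetical sieve data, not the study's measurements:

        # Geometric mean diameter (dgw) and geometric standard deviation (Sgw)
        # by mass from sieve data, ASAE S319-style; inputs are hypothetical.
        import math

        sieve_mid_mm = [2.03, 1.20, 0.71, 0.36, 0.18, 0.11]  # mean size per sieve class
        mass_g = [5.0, 15.0, 35.0, 25.0, 15.0, 5.0]          # mass retained per class

        total = sum(mass_g)
        log_dgw = sum(w * math.log10(d) for w, d in zip(mass_g, sieve_mid_mm)) / total
        dgw = 10.0 ** log_dgw
        s_log = math.sqrt(sum(w * (math.log10(d) - log_dgw) ** 2
                              for w, d in zip(mass_g, sieve_mid_mm)) / total)
        sgw = dgw / 2.0 * (10.0 ** s_log - 10.0 ** -s_log)   # expressed in mm
        print(f"dgw = {dgw:.3f} mm, Sgw = {sgw:.3f} mm")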

  5. A Comparison of Mission Statements of National Blue Ribbon Schools and Unacceptable Texas High Schools

    ERIC Educational Resources Information Center

    Perfetto, John Charles; Holland, Glenda; Davis, Rebecca; Fedynich, La Vonne

    2013-01-01

    This study was conducted to determine the themes present in the context of high schools, to determine any significant differences in themes for high and low performing high schools, and to determine if significant differences were present for the same sample of high schools based on school size. An analysis of the content of mission statements…

  6. US Food and Drug Administration survey of methyl mercury in canned tuna

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yess, J.

    1993-01-01

    Methyl mercury was determined by the US Food and Drug Administration (FDA) in 220 samples of canned tuna collected in 1991. Samples were chosen to represent different styles, colors, and packs as available. Emphasis was placed on water-packed tuna, small can size, and the highest-volume brand names. The average methyl mercury (expressed as Hg) found for the 220 samples was 0.17 ppm; the range was <0.10-0.75 ppm. Statistically, a significantly higher level of methyl mercury was found in solid white and chunk tuna. Methyl mercury level was not related to can size. None of the 220 samples had methyl mercury levels that exceeded the 1 ppm FDA action level. 11 refs., 1 tab.

  7. Numerical sedimentation particle-size analysis using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.

    2015-12-01

    Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected in two dimensions, with diameters ranging from 2.5 ×10-6 m to 70 ×10-6 m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement considering laminar-flow interactions of buoyant, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data; but, as the results show, these simulations can identify the strong and weak points of each method and eventually recommend useful variations and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.
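
    The physics these simulated tests rest on can be checked against Stokes' law, v = (rho_p - rho_f) * g * d^2 / (18 * mu): the extreme diameters above settle at rates roughly three orders of magnitude apart, which is what the hydrometer and pipette methods exploit. A worked check under assumed quartz-in-water properties (this is the settling relation, not the DEM itself):

        # Stokes settling velocities for the smallest and largest simulated
        # spheres, assuming quartz-like grains in water at about 20 C.
        rho_p, rho_f = 2650.0, 998.0   # particle and fluid density, kg/m^3
        mu, g = 1.0e-3, 9.81           # water viscosity (Pa.s) and gravity (m/s^2)

        for d in (2.5e-6, 70e-6):      # extreme simulated diameters, m
            v = (rho_p - rho_f) * g * d**2 / (18.0 * mu)
            print(f"d = {d*1e6:4.1f} um -> v = {v*1000:.4f} mm/s")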

  8. Analytical template protection performance and maximum key size given a Gaussian-modeled biometric source

    NASA Astrophysics Data System (ADS)

    Kelkboom, Emile J. C.; Breebaart, Jeroen; Buhan, Ileana; Veldhuis, Raymond N. J.

    2010-04-01

    Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A large portion of template protection techniques are based on extracting a key from, or binding a key to, a biometric sample. The achieved protection depends on the size of the key and its closeness to being random. In the literature it can be observed that there is a large variation in the reported key lengths at similar classification performance of the same template protection system, even when based on the same biometric modality and database. In this work we determine the analytical relationship between the system performance and the theoretical maximum key size given a biometric source modeled by parallel Gaussian channels. We consider the case where the source capacity is evenly distributed across all channels and the channels are independent. We also determine the effect of parameters such as the source capacity, the number of enrolment and verification samples, and the operating point selection on the maximum key size. We show that a trade-off exists between the privacy protection of the biometric system and its convenience for its users.

  9. Temporal change in the size distribution of airborne Radiocesium derived from the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Kaneyasu, Naoki; Ohashi, Hideo; Suzuki, Fumie; Okuda, Tomoaki; Ikemori, Fumikazu; Akata, Naofumi

    2013-04-01

    The accident at the Fukushima Dai-ichi nuclear power plant discharged a large amount of radioactive materials into the environment. Forty days after the accident, we started to collect size-segregated aerosol at Tsukuba City, Japan, located 170 km south of the plant, by use of a low-pressure cascade impactor. The sampling continued from April 28 through October 26, 2011. The number of sample sets collected in total was 8. The radioactivity of 134Cs and 137Cs in aerosols collected at each stage was determined by gamma-ray spectrometry with a high-sensitivity germanium detector. After the gamma-ray spectrometry analysis, the chemical species in the aerosols were analyzed. The analyses of the first (April 28-May 12) and second (May 12-26) samples showed that the activity size distributions of 134Cs and 137Cs in aerosols reside mostly in the accumulation mode size range. These activity size distributions almost overlapped with the mass size distribution of non-sea-salt sulfate aerosol. From these results, we regarded sulfate as the main transport medium of these radionuclides, and re-suspended soil particles with attached radionuclides were not the major airborne radioactive substances by the end of May 2011 (Kaneyasu et al., 2012). We further conducted a successive extraction experiment of radiocesium from the aerosol deposits on the aluminum sheet substrate (8th stage of the first aerosol sample, 0.5-0.7 μm in aerodynamic diameter) with water and 0.1 M HCl. In contrast to the relatively insoluble property of Chernobyl radionuclides, those in fine-mode aerosols collected at Tsukuba are completely water-soluble (100%). From the third aerosol sample, the activity size distributions started to change, i.e., the major peak in the accumulation mode size range seen in the first and second aerosol samples became smaller and an additional peak appeared in the coarse mode size range. The comparison of the activity size distributions of radiocesium and the mass size distributions of major aerosol components collected by the end of August 2011 (i.e., sample No. 5) and its implication will be discussed in the presentation. Reference: Kaneyasu et al., Environ. Sci. Technol. 46, 5720-5726 (2012).

  10. Affected States Soft Independent Modeling by Class Analogy from the Relation Between Independent Variables, Number of Independent Variables and Sample Size

    PubMed Central

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz

    2013-01-01

    Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to determine whether the method is affected by the number of independent variables, the relationship between variables, and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. In order to determine whether the method is influenced by the number of independent variables, the relationship between variables and the sample size, simulations were done. Conditions in which sample sizes in both groups are equal, with 30, 100 and 1000 samples; in which the number of variables is 2, 3, 5, 10, 50 and 100; and in which the relationship between variables is quite high, medium, or quite low were considered. Results: Average classification accuracies from simulations carried out 1000 times for each possible condition of the trial plan are given as tables. Conclusion: Diagnostic accuracy results increase as the number of independent variables increases. SIMCA is a method that can be used when the relationship between variables is quite high, when the number of independent variables is large, and when the data contain outlier values. PMID:25207065
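
    For orientation, the core of SIMCA can be sketched compactly: one principal-component model is fitted per class, and a new sample is assigned to the class whose model leaves the smallest residual. The sketch below uses synthetic data and is not the simulation design of the paper:

        # Hedged sketch of the SIMCA idea on synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)

        def fit_class_model(X, n_pc=2):
            mu = X.mean(axis=0)
            _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
            return mu, vt[:n_pc]               # class mean and PC loadings

        def residual_distance(x, model):
            mu, load = model
            r = (x - mu) - load.T @ (load @ (x - mu))  # part the PCs do not explain
            return np.linalg.norm(r)

        X_a = rng.normal(0.0, 1.0, (30, 5))    # class A training samples
        X_b = rng.normal(3.0, 1.0, (30, 5))    # class B training samples
        models = {"A": fit_class_model(X_a), "B": fit_class_model(X_b)}

        x_new = rng.normal(3.0, 1.0, 5)        # drawn from class B
        print(min(models, key=lambda k: residual_distance(x_new, models[k])))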

  12. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable step size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is also employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. Simulation results demonstrated the performance advantage of the proposed TOF determination method over existing TOF determination methods. Compared with the conventional fixed step size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes the proposed algorithm more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates with various thicknesses. They indicated that the proposed TOF determination method was more robust even under low SNR conditions, and that the ultrasonic thickness measurement accuracy could be significantly improved.
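
    The proposed method itself combines a variable step size adaptive filter with cubic spline fitting; the stand-in below only illustrates the sub-sample refinement idea using the simpler cross-correlation route, locating the correlation peak and interpolating around it to beat the finite sampling interval. All signal parameters are assumptions of this sketch:

        # Sub-sample TOF estimation by cross-correlation peak interpolation;
        # a simplified stand-in, not the adaptive algorithm of the paper.
        import numpy as np

        fs = 1.0e8                                # 100 MHz sampling (assumed)
        t = np.arange(0, 2e-6, 1.0 / fs)
        pulse = np.exp(-((t - 2e-7) / 5e-8) ** 2) * np.sin(2 * np.pi * 5e6 * t)

        true_delay = 3.37e-7                      # not an integer number of samples
        echo = np.interp(t - true_delay, t, pulse, left=0.0)
        echo += np.random.default_rng(1).normal(0.0, 0.02, t.size)  # noise

        xc = np.correlate(echo, pulse, mode="full")
        k = int(xc.argmax())
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # parabolic vertex offset
        tof = (k - (pulse.size - 1) + frac) / fs
        print(f"estimated TOF = {tof*1e9:.1f} ns (true {true_delay*1e9:.1f} ns)")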

  13. Utilization of Yatagan Power Plant Fly Ash in Production of Building Bricks

    NASA Astrophysics Data System (ADS)

    Önel, Öznur; Tanriverdi, Mehmet; Cicek, Tayfun

    2017-12-01

    Fly ash is a by-product of coal combustion, which accumulates in large quantities near coal-fired power plants as waste material. Fly ash causes serious operational and environmental problems. In this study, fly ash from the Yatağan thermal power plant was used to produce light-weight building bricks. The study aimed to reduce the problems related to fly ash by creating a new area for its use. The optimum process parameters were determined for the production of real-size bricks to be used in the construction industry. The commercial-size bricks (200 × 200 × 90-110 mm) were manufactured using pilot-size equipment. Mechanical properties, thermal conductivity coefficients, freezing and thawing strengths, water absorption rates, and unit volume weights of the bricks were determined. Ettringite (Ca6Al2(SO4)3(OH)12·25(H2O)) and calcium silicate hydrate (2CaO.SiO2.4H2O) were identified as the binding phases in the real-size brick samples after 2 days of pre-curing and 28 days of curing at 50 °C and 95% relative moisture. The water absorption rate was found to be 27.7% in terms of mass. The mechanical and bending strengths of the brick samples with a unit volume weight of 1.29 g.cm-3 were determined as 6.75 MPa and 1.56 MPa, respectively. The thermal conductivity of the fly ash bricks was measured on average as 0.340 W m-1 K-1. The fly ash sample produced was subjected to toxic leaching tests (Toxic Property Leaching Procedure (EPA-TCLP 1311), Single-step BATCH Test and Method-A Disintegration Procedure (ASTM)). The results of these tests suggested that the materials could be classified as non-hazardous wastes/materials.

  14. Evaluation of Low-Gravity Smoke Particulate for Spacecraft Fire Detection

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Mulholland George; Meyer, Marit; Yuan, Zeng guang; Cleary, Thomas; Yang, Jiann; Greenberg, Paul; Bryg, Victoria

    2013-01-01

    Tests were conducted on the International Space Station to evaluate the smoke particulate size from materials and conditions that are typical of those expected in spacecraft fires. Five different materials representative of those found in spacecraft (Teflon, Kapton, cotton, silicone rubber and Pyrell) were heated to temperatures below the ignition point with conditions controlled to provide repeatable sample surface temperatures and air flow. The air flow past the sample during the heating period ranged from quiescent to 8 cm/s. The effective transport time to the measurement instruments was varied from 11 to 800 seconds to simulate different smoke transport conditions in spacecraft. The resultant aerosol was evaluated by three instruments which measured different moments of the particle size distribution. These moment diagnostics were used to determine the particle number concentration (zeroth moment), the diameter concentration (first moment), and the mass concentration (third moment). These statistics were combined to determine the diameter of average mass and the count mean diameter, and, by assuming a log-normal distribution, the geometric mean diameter and the geometric standard deviations were also calculated. Smoke particle samples were collected on TEM grids using a thermal precipitator for post-flight analysis. The TEM grids were analyzed to determine the particle morphology and shape parameters. The different materials produced particles with significantly different morphologies. Overall, the majority of the average smoke particle sizes were found to be in the 200 to 400 nanometer range, with the quiescent cases and the cases with increased transport time typically producing substantially larger particles. The results varied between materials, but the smoke particles produced in low gravity were typically twice the size of particles produced in normal gravity. These results can be used to establish design requirements for future spacecraft smoke detectors.
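
    The moment arithmetic described above is compact enough to show directly: M0, M1 and M3 give the count mean diameter and the diameter of average mass, and the log-normal assumption (Hatch-Choate relations) then fixes the geometric mean diameter and geometric standard deviation. The channel data below are hypothetical, not the flight measurements:

        # Characteristic diameters from distribution moments; hypothetical data.
        import math

        d_nm = [150.0, 250.0, 400.0]       # channel diameters, nm
        n = [1.0e6, 5.0e5, 1.0e5]          # number concentration per channel

        m0 = sum(n)                                        # zeroth moment
        m1 = sum(ni * di for ni, di in zip(n, d_nm))       # first moment
        m3 = sum(ni * di**3 for ni, di in zip(n, d_nm))    # third moment

        cmd = m1 / m0                      # count mean diameter
        d_avg_mass = (m3 / m0) ** (1.0 / 3.0)
        # For a log-normal distribution, ln^2(GSD) = ln(d_avg_mass) - ln(cmd):
        ln2_sg = math.log(d_avg_mass) - math.log(cmd)
        gsd = math.exp(math.sqrt(ln2_sg))
        dg = cmd * math.exp(-0.5 * ln2_sg)  # geometric mean (Hatch-Choate)
        print(f"CMD = {cmd:.0f} nm, d_avg_mass = {d_avg_mass:.0f} nm, "
              f"GSD = {gsd:.2f}, dg = {dg:.0f} nm")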

  15. Transport of dissolved organic matter in Boom Clay: Size effects

    NASA Astrophysics Data System (ADS)

    Durce, D.; Aertsens, M.; Jacques, D.; Maes, N.; Van Gompel, M.

    2018-01-01

    A coupled experimental-modelling approach was developed to evaluate the effects of molecular weight (MW) of dissolved organic matter (DOM) on its transport through intact Boom Clay (BC) samples. Natural DOM was sampled in-situ in the BC layer. Transport was investigated with percolation experiments on 1.5 cm BC samples by measuring the outflow MW distribution (MWD) by size exclusion chromatography (SEC). A one-dimensional reactive transport model was developed to account for retardation, diffusion and entrapment (attachment and/or straining) of DOM. These parameters were determined along the MWD by implementing a discretisation of DOM into several MW points and modelling the breakthrough of each point. The pore throat diameter of BC was determined as 6.6-7.6 nm. Below this critical size, transport of DOM is MW dependent and two major types of transport were identified. Below a MW of 2 kDa, DOM was neither strongly trapped nor strongly retarded. This fraction had an average capacity factor of 1.19 ± 0.24 and an apparent dispersion coefficient ranging from 7.5 × 10-11 to 1.7 × 10-11 m2/s with increasing MW. DOM with MW > 2 kDa was affected by both retardation and straining, which increased significantly with increasing MW while apparent dispersion coefficients decreased. Values ranging from 1.36 to 19.6 were determined for the capacity factor and 3.2 × 10-11 to 1.0 × 10-11 m2/s for the apparent dispersion coefficient for species with 2.2 kDa < MW < 9.3 kDa. Straining resulted in an immobilisation of on average 49 ± 6% of the injected 9.3 kDa species. Our findings show that an accurate description of DOM transport requires the consideration of these size effects.
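
    For orientation, the purely diffusive core of such a breakthrough experiment has a simple closed form: for a semi-infinite medium with a constant source, C/C0 = erfc(x / (2*sqrt(D*t))). The sketch below uses the sample thickness and the low-MW apparent dispersion coefficient quoted above; the authors' model additionally includes retardation and entrapment, which this sketch deliberately omits:

        # Simplified diffusive breakthrough, not the paper's full model.
        import math

        x = 0.015        # intact-sample thickness, m (1.5 cm)
        d_app = 7.5e-11  # apparent dispersion coefficient, m^2/s (low-MW end)

        for t_days in (10, 50, 200):
            t = t_days * 86400.0
            c_ratio = math.erfc(x / (2.0 * math.sqrt(d_app * t)))
            print(f"t = {t_days:3d} d -> C/C0 = {c_ratio:.2f}")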

  16. Spectro-microscopic Characterization of Physical Properties and Phase Separations in Individual Atmospheric Particles

    NASA Astrophysics Data System (ADS)

    OBrien, R. E.; Wang, B.; Neu, A.; Kelly, S. T.; Lundt, N.; Epstein, S. A.; MacMillan, A.; You, Y.; Laskin, A.; Nizkorodov, S.; Bertram, A. K.; Moffet, R.; Gilles, M.

    2013-12-01

    The phase state and liquid-liquid phase separations of ambient and laboratory-generated aerosol particles were investigated using (1) scanning transmission x-ray microscopy/near-edge x-ray absorption fine structure spectroscopy (STXM/NEXAFS) coupled to a relative humidity (RH) controlled in-situ chamber and (2) environmental scanning electron microscopy (ESEM). The phase states of the particles were determined from measurements of their size and optical density. A comparison is made between the observed phase states of ambient samples and of laboratory-generated aerosols to determine how well laboratory samples represent the phase of ambient samples. In addition, liquid-liquid phase separations in laboratory-generated particles were investigated. Preliminary results showing that liquid-liquid phase separations occur at RHs between the deliquescence and efflorescence points and that the organic phase surrounds the inorganic phase will be presented. The STXM/NEXAFS technique provides insight into the degree of mixing at the deliquescence point and the degree of phase separation for particles of atmospherically relevant sizes.

  17. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    USGS Publications Warehouse

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated, because of the difficulty in attaining reliable estimates. We make recommendations for which techniques show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
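
    A greatly simplified stand-in for this kind of power analysis is sketched below: annual survival is estimated as a plain binomial proportion, and a decline counts as "detected" when the two years' 95% confidence intervals fail to overlap. Real mark-recapture data carry far less information per marked animal, since capture and recovery probabilities are low, which is why the Burnham-model requirements quoted above are much larger than this toy suggests:

        # Toy CI-overlap power simulation; not the Burnham joint model.
        import numpy as np

        rng = np.random.default_rng(42)

        def ci(phi_true, n):
            p = rng.binomial(n, phi_true) / n
            half = 1.96 * np.sqrt(p * (1.0 - p) / n)
            return p - half, p + half

        def power(n, phi1=0.80, decline=0.10, sims=2000):
            phi2 = phi1 * (1.0 - decline)
            hits = 0
            for _ in range(sims):
                lo1, _ = ci(phi1, n)
                _, hi2 = ci(phi2, n)
                hits += hi2 < lo1        # CIs separate -> decline "detected"
            return hits / sims

        for n in (500, 5000, 50000):
            print(f"n = {n:>6}: power to detect a 10% decline = {power(n):.2f}")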

  18. Size-exclusion chromatography for the determination of the boiling point distribution of high-boiling petroleum fractions.

    PubMed

    Boczkaj, Grzegorz; Przyjazny, Andrzej; Kamiński, Marian

    2015-03-01

    The paper describes a new procedure for the determination of the boiling point distribution of high-boiling petroleum fractions using size-exclusion chromatography with refractive index detection. Thus far, the determination of boiling range distribution by chromatography has been accomplished using simulated distillation with gas chromatography with flame ionization detection. This study revealed that, in spite of substantial differences in the separation mechanism and the detection mode, the size-exclusion chromatography technique yields similar results for the determination of boiling point distribution compared with simulated distillation and novel empty-column gas chromatography. The developed procedure using size-exclusion chromatography has substantial applicability, especially for the determination of exact final boiling point values for high-boiling mixtures, for which a standard high-temperature simulated distillation would have to be used. In this case, the precision of final boiling point determination is low due to the high final temperatures of the gas chromatograph oven and insufficient thermal stability of both the gas chromatography stationary phase and the sample. Additionally, the use of high-performance liquid chromatography detectors more sensitive than refractive index detection allows a lower detection limit for high-molar-mass aromatic compounds, and thus increases the sensitivity of final boiling point determination. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
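
    Schematically, a simulated-distillation-style result is obtained by mapping retention time to boiling point through a calibration fitted on known standards and accumulating the detector trace into a cumulative percent; in SEC the largest, highest-boiling species elute first, so the elution order is reversed relative to gas-chromatographic simulated distillation. All values in the sketch are hypothetical:

        # Hedged sketch: chromatogram -> boiling point distribution.
        import numpy as np

        rt_std = np.array([10.0, 12.0, 14.0, 16.0, 18.0])        # retention times
        bp_std = np.array([550.0, 480.0, 410.0, 340.0, 270.0])   # boiling points, C
        cal = np.polyfit(rt_std, bp_std, 2)      # calibration: time -> boiling point

        rt = np.linspace(10.0, 18.0, 200)                        # sample trace
        signal = np.exp(-((rt - 14.5) / 1.5) ** 2)               # RI detector signal

        cum = np.cumsum(signal) / signal.sum() * 100.0           # cumulative mass %
        bp_first = np.polyval(cal, rt[np.searchsorted(cum, 0.5)])
        bp_last = np.polyval(cal, rt[np.searchsorted(cum, 99.5)])
        ibp, fbp = sorted([bp_first, bp_last])
        print(f"IBP ~ {ibp:.0f} C, FBP ~ {fbp:.0f} C")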

  19. Microcystin distribution in physical size class separations of natural plankton communities

    USGS Publications Warehouse

    Graham, J.L.; Jones, J.R.

    2007-01-01

    Phytoplankton communities in 30 northern Missouri and Iowa lakes were physically separated into 5 size classes (>100 μm, 53-100 μm, 35-53 μm, 10-35 μm, 1-10 μm) during 15-21 August 2004 to determine the distribution of microcystin (MC) in size-fractionated lake samples and assess how net collections influence estimates of MC concentration. MC was detected in whole water (total) from 83% of lakes sampled, and total MC values ranged from 0.1-7.0 μg/L (mean = 0.8 μg/L). On average, MC in the >100 μm size class comprised ~40% of total MC, while other individual size classes contributed 9-20% to total MC. MC values decreased with size class and were significantly greater in the >100 μm size class (mean = 0.5 μg/L) than the 35-53 μm (mean = 0.1 μg/L), 10-35 μm (mean = 0.0 μg/L), and 1-10 μm (mean = 0.0 μg/L) size classes (p < 0.01). MC values in nets with 100-μm, 53-μm, 35-μm, and 10-μm mesh were cumulatively summed to simulate the potential bias of measuring MC with various size plankton nets. On average, a 100-μm net underestimated total MC by 51%, compared to 37% for a 53-μm net, 28% for a 35-μm net, and 17% for a 10-μm net. While plankton nets consistently underestimated total MC, concentration of algae with net sieves allowed detection of MC at low levels (~0.01 μg/L); 93% of lakes had detectable levels of MC in concentrated samples. Thus, small-mesh plankton nets are an option for documenting MC occurrence, but whole water samples should be collected to characterize total MC concentrations. © Copyright by the North American Lake Management Society 2007.
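
    The net-bias figures quoted above come from cumulatively summing the size classes a given mesh retains and comparing with the whole-water total; the arithmetic is easy to reproduce (the values below are illustrative, not the study data):

        # Cumulative net-bias arithmetic; MC values are illustrative only.
        size_class_mc = {">100 um": 0.32, "53-100 um": 0.12, "35-53 um": 0.08,
                         "10-35 um": 0.10, "1-10 um": 0.18}   # ug/L, hypothetical
        total_mc = sum(size_class_mc.values())

        caught = 0.0
        for size_class, net in [(">100 um", "100-um net"), ("53-100 um", "53-um net"),
                                ("35-53 um", "35-um net"), ("10-35 um", "10-um net")]:
            caught += size_class_mc[size_class]
            bias = 100.0 * (1.0 - caught / total_mc)
            print(f"{net}: underestimates total MC by {bias:.0f}%")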

  20. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure.

    PubMed

    Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N

    2016-06-01

    When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (Δ = δ/σ), that is, the hypothesized difference in means (δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of δ and σ as the averaging weight, is used, and the value of Δ is found that equates the prespecified frequentist power (1 − β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of Δ found using this method may be expressed as a function of the prior means of δ and σ and their prior standard deviations. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size which the study has a high probability of correctly detecting, based on the available prior information on the difference δ and the standard deviation σ, provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
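
    A Monte Carlo sketch of the procedure's central idea, using δ and σ as above: classical power is averaged over prior draws of δ and σ, and the effect size whose classical power matches that average is the one carried into the sample size formulas. The priors, sample size, and normal-approximation power formula are all assumptions of this illustration:

        # Hedged sketch of conditional expected power and the down-weighted
        # effect size; priors and n are hypothetical.
        import math
        import numpy as np

        def phi(x):  # standard normal CDF, vectorized via math.erf
            return 0.5 * (1.0 + np.vectorize(math.erf)(np.asarray(x, float) / math.sqrt(2.0)))

        Z_CRIT = 1.959964  # two-sided 5% critical value

        def classical_power(es, n_per_arm):
            # normal-approximation power of a two-sample test at effect size es
            return phi(es * math.sqrt(n_per_arm / 2.0) - Z_CRIT)

        rng = np.random.default_rng(7)
        delta = rng.normal(0.50, 0.15, 50_000)            # prior draws for delta
        sigma = np.abs(rng.normal(1.00, 0.20, 50_000))    # prior draws for sigma

        n_per_arm = 64
        cep = classical_power(delta / sigma, n_per_arm).mean()

        grid = np.linspace(0.05, 1.0, 2000)
        es_star = grid[np.argmin(np.abs(classical_power(grid, n_per_arm) - cep))]
        print(f"conditional expected power = {cep:.3f}")
        print(f"effect size for sample size formulas = {es_star:.3f} "
              f"(naive ratio of prior means = {0.50 / 1.00:.2f})")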

  1. ZnFe2O4 nanoparticles dispersed in a highly porous silica aerogel matrix: a magnetic study.

    PubMed

    Bullita, S; Casu, A; Casula, M F; Concas, G; Congiu, F; Corrias, A; Falqui, A; Loche, D; Marras, C

    2014-03-14

    We report the detailed structural characterization and magnetic investigation of nanocrystalline zinc ferrite nanoparticles supported on a porous silica aerogel matrix, which differ in size (in the range 4-11 nm) and in inversion degree (from 0.4 to 0.2), as compared to bulk zinc ferrite, which has a normal spinel structure. The samples were investigated by zero-field-cooling-field-cooling and thermoremanent DC magnetization measurements, AC magnetization investigation, and Mössbauer spectroscopy. The nanocomposites are superparamagnetic at room temperature; the temperature of the superparamagnetic transition in the samples decreases with the particle size, and therefore it is mainly determined by the inversion degree rather than by the particle size, which would give an opposite effect on the blocking temperature. The contribution of particle interaction to the magnetic behavior of the nanocomposites decreases significantly in the sample with the largest particle size. The values of the anisotropy constant give evidence that the anisotropy constant decreases upon increasing the particle size of the samples. All these results clearly indicate that, even when dispersed at low concentration in a non-magnetic, highly porous, and insulating matrix, the zinc ferrite nanoparticles show a magnetic behavior similar to that displayed when they are unsupported or dispersed in a similar but denser matrix with higher loading. The effective anisotropy measured for our samples appears to be systematically higher than that measured for supported zinc ferrite nanoparticles of similar size, indicating that this effect probably occurs as a consequence of the high inversion degree.

  2. Study of structural and magnetic properties of melt spun Nd2Fe13.6Zr0.4B ingot and ribbon

    NASA Astrophysics Data System (ADS)

    Amin, Muhammad; Siddiqi, Saadat A.; Ashfaq, Ahmad; Saleem, Murtaza; Ramay, Shahid M.; Mahmood, Asif; Al-Zaghayer, Yousef S.

    2015-12-01

    Nd2Fe13.6Zr0.4B hard magnetic material was prepared using an arc-melting technique on a water-cooled copper hearth kept under an argon gas atmosphere. The prepared samples, Nd2Fe13.6Zr0.4B ingot and ribbon, were characterized using X-ray diffraction (XRD) and scanning electron microscopy (SEM) for crystal structure determination and morphological studies, respectively. The magnetic properties of the samples were explored using a vibrating sample magnetometer (VSM). The lattice constants slightly increased due to the difference between the ionic radii of Fe and Zr. The bulk density decreased due to the smaller molar weight and lower density of Zr as compared to Fe. The ingot sample shows an almost single crystalline phase with larger crystallite sizes, whereas the ribbon sample shows a mixture of amorphous and crystalline phases with smaller crystallite sizes. The crystallinity of the material was strongly affected by the thermal treatments. Magnetic measurements show noticeable variation in magnetic behavior with the change in crystallite size. The ingot sample shows soft magnetic behavior, while the ribbon shows hard magnetic behavior.

  3. X-ray simulations method for the large field of view

    NASA Astrophysics Data System (ADS)

    Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.

    2018-03-01

    In the standard approach, X-ray simulation is usually limited by the spatial sampling step required to calculate convolution integrals of the Fresnel type. Explicitly, the sampling step is determined by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is determined by the precision of the integral convolution calculations and is not connected with the spatial resolution of an optical scheme. In the developed approach, the convolution in normal space is replaced by computation of the shear strain of the ambiguity function in phase space. The spatial sampling is then determined by the spatial resolution of the optical scheme. The sampling step can differ in various directions because of source anisotropy. The approach was used to simulate original images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize methods of postprocessing.

  4. High transport efficiency of nanoparticles through a total-consumption sample introduction system and its beneficial application for particle size evaluation in single-particle ICP-MS.

    PubMed

    Miyashita, Shin-Ichi; Mitsuhashi, Hiroaki; Fujii, Shin-Ichiro; Takatsu, Akiko; Inagaki, Kazumi; Fujimoto, Toshiyuki

    2017-02-01

    In order to facilitate reliable and efficient determination of both the particle number concentration (PNC) and the size of nanoparticles (NPs) by single-particle ICP-MS (spICP-MS) without the need to correct for the particle transport efficiency (TE, a possible source of bias in the results), a total-consumption sample introduction system consisting of a large-bore, high-performance concentric nebulizer and a small-volume on-axis cylinder chamber was utilized. Such a system potentially permits a particle TE of 100 %, meaning that there is no need to include a particle TE correction when calculating the PNC and the NP size. When the particle TE through the sample introduction system was evaluated by comparing the frequency of sharp transient signals from the NPs in a measured NP standard of precisely known PNC to the particle frequency for a measured NP suspension, the TE for platinum NPs with a nominal diameter of 70 nm was found to be very high (i.e., 93 %), and showed satisfactory repeatability (relative standard deviation of 1.0 % for four consecutive measurements). These results indicated that employing this total consumption system allows the particle TE correction to be ignored when calculating the PNC. When the particle size was determined using a solution-standard-based calibration approach without an NP standard, the particle diameters of platinum and silver NPs with nominal diameters of 30-100 nm were found to agree well with the particle diameters determined by transmission electron microscopy, regardless of whether a correction was performed for the particle TE. Thus, applying the proposed system enables NP size to be accurately evaluated using a solution-standard-based calibration approach without the need to correct for the particle TE.
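
    Once the transport efficiency can be taken as ~1, the size calculation is short: a dissolved-standard calibration converts the integrated counts of one particle event into a mass, and the bulk density turns that into a spherical-equivalent diameter. A hedged sketch with hypothetical numbers for platinum:

        # Hedged spICP-MS size arithmetic; calibration and counts hypothetical.
        import math

        counts_per_fg = 25.0    # response from a dissolved Pt standard
        event_counts = 180.0    # integrated counts of one particle event
        transport_eff = 1.0     # ~100% for the total-consumption system

        mass_fg = event_counts / (counts_per_fg * transport_eff)
        rho_fg_um3 = 21.45 * 1000.0   # Pt: 21.45 g/cm^3 = 21450 fg/um^3
        d_um = (6.0 * mass_fg / (math.pi * rho_fg_um3)) ** (1.0 / 3.0)
        print(f"mass ~ {mass_fg:.1f} fg -> diameter ~ {d_um*1e3:.0f} nm")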

  5. Companions in Color: High-Resolution Imaging of Kepler’s Sub-Neptune Host Stars

    NASA Astrophysics Data System (ADS)

    Ware, Austin; Wolfgang, Angie; Kannan, Deepti

    2018-01-01

    A current problem in astronomy is determining how sub-Neptune-sized exoplanets form in planetary systems. These planets, which fall between 1 and 4 times the size of Earth, were discovered in abundance by the Kepler Mission and were typically found with relatively short orbital periods. The combination of their size and orbital period makes them unusual in relation to the Solar System, leading to the question of how these exoplanets form and evolve. One possibility is that they have been influenced by distant stellar companions. To help assess the influence of these objects on the present-day, observed properties of exoplanets, we conduct a NIR search for visual stellar companions to the stars around which the Kepler Mission discovered planets. We use high-resolution images obtained with the adaptive optics systems at the Lick Observatory Shane-3m telescope to find these companion stars. Importantly, we also determine the effective brightness and distance from the planet-hosting star at which it is possible to detect these companions. Out of the 200 KOIs in our sample, 42 KOIs (21%) have visual companions within 3", and 90 (46%) have them within 6". These findings are consistent with recent high-resolution imaging from Furlan et al. 2017, which found at least one visual companion within 4" for 31% of sampled KOIs (37% within 4" for our sample). Our results are also complementary to Furlan et al. 2017, with only 17 visual companions commonly detected in the same filter. As for detection limits, our preliminary results indicate that we can detect companion stars up to 3-5 magnitudes fainter than the planet-hosting star at a separation of ~1". These detection limits will enable us to determine the probability that companion stars could be hidden within the noise around the planet-hosting star, an important step in determining the frequency with which these short-period, sub-Neptune-sized planets occur within binary star systems.

  6. Selective Laser Melting of Metal Powder of Steel 316L

    NASA Astrophysics Data System (ADS)

    Smelov, V. G.; Sotov, A. V.; Agapovichev, A. V.; Tomilina, T. M.

    2016-08-01

    In this article the results of an experimental study of the structure and mechanical properties of materials obtained by selective laser melting (SLM) of 316L steel metal powder are presented. Before the process of growing the samples, as input control, the morphology of the surface of the powder particles was studied and particle size analysis was carried out. In addition, 3D X-ray quality control of the grown samples was carried out in order to detect hidden defects and provide their qualitative and quantitative assessment. To determine the strength characteristics of the samples synthesized by the SLM method, static tensile tests were conducted. To determine the stresses in the material of the samples, X-ray diffraction analysis was carried out.

  7. Determination of thorium by fluorescent x-ray spectrometry

    USGS Publications Warehouse

    Adler, I.; Axelrod, J.M.

    1955-01-01

    A fluorescent x-ray spectrographic method for the determination of thoria in rock samples uses thallium as an internal standard. Measurements are made with a two-channel spectrometer equipped with quartz (d = 1.817 A.) analyzing crystals. Particle-size effects are minimized by grinding the sample components with a mixture of silicon carbide and aluminum and then briquetting. Analyses of 17 samples showed that for the 16 samples containing over 0.7% thoria the average error, based on chemical results, is 4.7% and the maximum error, 9.5%. Because of limitations of instrumentation, 0.2% thoria is considered the lower limit of detection. An analysis can be made in about an hour.

  8. Determination of lateral size distribution of type-II ZnTe/ZnSe stacked submonolayer quantum dots via spectral analysis of optical signature of the Aharonov-Bohm excitons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Haojie; Dhomkar, Siddharth; Roy, Bidisha

    2014-10-28

    For submonolayer quantum dot (QD) based photonic devices, the size and density of QDs are critical parameters, the probing of which requires indirect methods. We report the determination of the lateral size distribution of type-II ZnTe/ZnSe stacked submonolayer QDs, based on spectral analysis of the optical signature of Aharonov-Bohm (AB) excitons, complemented by photoluminescence studies, secondary-ion mass spectroscopy, and numerical calculations. Numerical calculations are employed to determine the AB transition magnetic field as a function of the type-II QD radius. The study of four samples grown with different tellurium fluxes shows that the lateral size of the QDs increases by just 50%, even though the tellurium concentration increases 25-fold. Detailed spectral analysis of the emission of the AB exciton shows that the QD radii take on only certain values due to vertical correlation and the stacked nature of the QDs.

  9. Segmented polynomial taper equation incorporating years since thinning for loblolly pine plantations

    Treesearch

    A. Gordon Holley; Thomas B. Lynch; Charles T. Stiff; William Stansfield

    2010-01-01

    Data from 108 trees felled from 16 loblolly pine stands owned by Temple-Inland Forest Products Corp. were used to determine effects of years since thinning (YST) on stem taper using the Max–Burkhart type segmented polynomial taper model. Sample tree YST ranged from two to nine years prior to destructive sampling. In an effort to equalize sample sizes, tree data were...

  10. Advanced functional materials in solid phase extraction for ICP-MS determination of trace elements and their species - A review.

    PubMed

    He, Man; Huang, Lijin; Zhao, Bingshan; Chen, Beibei; Hu, Bin

    2017-06-22

    For the determination of trace elements and their species in various real samples by inductively coupled plasma mass spectrometry (ICP-MS), solid phase extraction (SPE) is a commonly used sample pretreatment technique to remove the complex matrix, pre-concentrate target analytes and make the samples suitable for subsequent sample introduction and measurement. The sensitivity, selectivity/anti-interference ability, sample throughput and application potential of the SPE-ICP-MS methodology depend greatly on the SPE adsorbents. This article presents a general overview of the use of advanced functional materials (AFMs) in SPE for ICP-MS determination of trace elements and their species in the past decade. Herein the AFMs refer to materials featuring high adsorption capacity, good selectivity, fast adsorption/desorption dynamics and the ability to satisfy special requirements in real sample analysis, including nanometer-sized materials, porous materials, ion imprinting polymers, restricted access materials and magnetic materials. Carbon/silica/metal/metal oxide nanometer-sized adsorbents with high surface area and plenty of adsorption sites exhibit high adsorption capacity, and porous adsorbents provide more adsorption sites and faster adsorption dynamics. The selectivity of the materials for target elements/species can be improved by physical/chemical modification, ion imprinting and the restricted access technique. Magnetic adsorbents in conventional batch operation offer a unique magnetic response and a high surface area-to-volume ratio, which provide very easy phase separation and greater extraction capacity and efficiency than conventional adsorbents, and chip-based magnetic SPE provides a versatile platform for special requirements (e.g., cell analysis). The performance of these adsorbents for the determination of trace elements and their species in different matrices by ICP-MS is discussed in detail, along with perspectives and possible challenges in future development. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Method to determine 226Ra in small sediment samples by ultralow background liquid scintillation.

    PubMed

    Sanchez-Cabeza, Joan-Albert; Kwong, Laval Liong Wee; Betti, Maria

    2010-08-15

    (210)Pb dating of sediment cores is a widely used tool to reconstruct ecosystem evolution and historical pollution during the last century. Although (226)Ra can be determined by gamma spectrometry, this method shows severe limitations which are, among others, sample size requirements and counting times. In this work, we propose a new strategy based on the analysis of (210)Pb through (210)Po in equilibrium by alpha spectrometry, followed by the determination of (226)Ra (base or supported (210)Pb) without any further chemical purification by liquid scintillation and with a higher sample throughput. Although gamma spectrometry might still be required to determine (137)Cs as an independent tracer, the effort can then be focused only on those sections dated around 1963, when maximum activities are expected. In this work, we optimized the counting conditions, calibrated the system for changing quenching, and described the new method to determine (226)Ra in small sediment samples, after (210)Po determination, allowing a more precise determination of excess (210)Pb ((210)Pb(ex)). The method was validated with reference materials IAEA-384, IAEA-385, and IAEA-313.

  12. Analysis of Longitudinal Outcome Data with Missing Values in Total Knee Arthroplasty.

    PubMed

    Kang, Yeon Gwi; Lee, Jang Taek; Kang, Jong Yeal; Kim, Ga Hye; Kim, Tae Kyun

    2016-01-01

    We sought to determine the influence of missing data on the statistical results, and to determine which statistical method is most appropriate for the analysis of longitudinal outcome data of TKA with missing values among repeated measures ANOVA, the generalized estimating equation (GEE) and the mixed effects model repeated measures (MMRM) approach. Data sets with missing values were generated with different proportions of missing data, sample sizes and missing-data generation mechanisms. Each data set was analyzed with the three statistical methods. The influence of missing data was greater with a higher proportion of missing data and a smaller sample size. MMRM tended to show the least change in the statistics. When missing values were generated by a 'missing not at random' mechanism, no statistical method could fully avoid deviations in the results. Copyright © 2016 Elsevier Inc. All rights reserved.
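
    The three classical missing-data mechanisms compared above are straightforward to simulate. A minimal sketch, with invented shapes, effect sizes and a 20% target missingness rate; it is not the authors' simulation protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n, t = 100, 4                            # subjects x repeated measures
    subject = rng.normal(0, 1, size=(n, 1))  # random subject effect
    y = 50 + subject + np.linspace(0, 2, t) + rng.normal(0, 1, size=(n, t))

    rate = 0.20
    mcar = rng.random((n, t)) < rate         # MCAR: independent of the data

    # MAR: missingness depends on the PREVIOUS (observed) measurement
    prev = np.c_[np.full((n, 1), y.mean()), y[:, :-1]]
    p_prev = 1 / (1 + np.exp(prev - y.mean()))   # lower last score -> dropout
    mar = rng.random((n, t)) < 2 * rate * p_prev

    # MNAR: missingness depends on the CURRENT (possibly unobserved) value
    p_now = 1 / (1 + np.exp(y - y.mean()))
    mnar = rng.random((n, t)) < 2 * rate * p_now

    for name, mask in [("MCAR", mcar), ("MAR", mar), ("MNAR", mnar)]:
        y_miss = np.where(mask, np.nan, y)
        print(name, f"missing: {mask.mean():.1%}",
              f"observed mean: {np.nanmean(y_miss):.2f}")
    ```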

  13. [A comparison of convenience sampling and purposive sampling].

    PubMed

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, the opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation, not by statistical power analysis.

  14. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
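
    A minimal sketch of the equivalent two-sample construction for logistic regression described above, followed by a normal-approximation power calculation for the resulting two-proportion comparison. The slope, covariate SD, total sample size and alpha are illustrative assumptions; note that centring on the overall probability preserves the expected number of events only approximately, because the logit is nonlinear.

    ```python
    import numpy as np
    from scipy import stats

    beta1 = 0.5      # slope on the covariate x
    sigma = 1.0      # SD of x
    p_overall = 0.3  # overall response probability
    n_total = 200
    alpha = 0.05

    # Two equally sized "samples" whose linear predictors differ by
    # beta1 * 2 * sigma, centred at the overall logit.
    eta0 = np.log(p_overall / (1 - p_overall))
    p1 = 1 / (1 + np.exp(-(eta0 - beta1 * sigma)))
    p2 = 1 / (1 + np.exp(-(eta0 + beta1 * sigma)))

    # Normal-approximation power of the two-proportion z-test
    n = n_total / 2
    pbar = (p1 + p2) / 2
    se0 = np.sqrt(2 * pbar * (1 - pbar) / n)              # SE under H0
    se1 = np.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)  # SE under H1
    z = stats.norm.ppf(1 - alpha / 2)
    power = stats.norm.sf((z * se0 - abs(p2 - p1)) / se1)
    print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, approximate power = {power:.2f}")
    ```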

  15. Domain-wall excitations in the two-dimensional Ising spin glass

    NASA Astrophysics Data System (ADS)

    Khoshbakht, Hamid; Weigel, Martin

    2018-02-01

    The Ising spin glass in two dimensions exhibits rich behavior with subtle differences in the scaling for different coupling distributions. We use recently developed mappings to graph-theoretic problems together with highly efficient implementations of combinatorial optimization algorithms to determine exact ground states for systems on square lattices with up to 10 000 × 10 000 spins. While these mappings only work for planar graphs, for example for systems with periodic boundary conditions in at most one direction, we suggest here an iterative windowing technique that allows one to determine ground states for fully periodic samples up to sizes similar to those for the open-periodic case. Based on these techniques, a large number of disorder samples are used together with a careful finite-size scaling analysis to determine the stiffness exponents and domain-wall fractal dimensions with unprecedented accuracy, our best estimates being θ = −0.2793(3) and df = 1.27319(9) for Gaussian couplings. For bimodal disorder, a new uniform sampling algorithm allows us to study the domain-wall fractal dimension, finding df = 1.279(2). Additionally, we also investigate the distributions of ground-state energies, of domain-wall energies, and of domain-wall lengths.
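
    The final step of such a finite-size scaling analysis reduces to a simple fit. A sketch on synthetic numbers generated with θ = −0.28; it does not reproduce the paper's data or error analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Typical domain-wall energy scales as |dE(L)| ~ L^theta; extract the
    # stiffness exponent from a log-log fit. The "measurements" are synthetic.
    L = np.array([16, 32, 64, 128, 256, 512])
    theta_true = -0.28
    dE = L**theta_true * np.exp(rng.normal(0, 0.02, size=L.size))

    slope, intercept = np.polyfit(np.log(L), np.log(dE), 1)
    print(f"fitted stiffness exponent theta = {slope:.3f}")
    ```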

  16. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of system biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Particle size analysis of lamb meat: Effect of homogenization speed, comparison with myofibrillar fragmentation index and its relationship with shear force.

    PubMed

    Karumendu, L U; Ven, R van de; Kerr, M J; Lanza, M; Hopkins, D L

    2009-08-01

    The impact of homogenization speed on particle size (PS) results was examined using samples from the M. longissimus thoracis et lumborum (LL) of 40 lambs. One gram duplicate samples from meat aged for 1 and 5 days were homogenized at five different speeds: 11,000, 13,000, 16,000, 19,000 and 22,000 rpm. In addition, LL samples from 30 different lamb carcases, also aged for 1 and 5 days, were used to study the comparison between PS and myofibrillar fragmentation index (MFI) values. In this case, 1 g duplicate samples (n=30) were homogenized at 16,000 rpm and the other half (0.5 g samples) at 11,000 rpm (n=30). The homogenates were then subjected to respective combinations of treatments which included either PS analysis or the determination of MFI, both with or without three cycles of centrifugation. All 140 samples of LL included 65 g blocks for subsequent shear force (SF) testing. Homogenization at 16,000 rpm provided the greatest ability to detect ageing differences for particle size between samples aged for 1 and 5 days. Particle size at the 25% quantile provided the best result for detecting differences due to ageing. It was observed that as ageing increased the mean PS decreased and was significantly (P<0.001) less for 5-day aged samples compared to 1-day aged samples, while MFI values significantly increased (P<0.001) as the ageing period increased. When comparing the PS and MFI methods it became apparent that, as opposed to the MFI method, there was a greater coefficient of variation for the PS method, which warranted a quality assurance system. Given this requirement and examination of the mean, standard deviation and the 25% quantile for PS data, it was concluded that three cycles of centrifugation were not necessary, and this also applied to the MFI method. There were significant correlations (P<0.001) within the same lamb loin sample aged for a given period between mean MFI and mean PS (-0.53), mean MFI and mean SF (-0.38), and mean PS and mean SF (0.23). It was concluded that PS analysis offers significant potential for streamlining determination of myofibrillar degradation when samples are measured after homogenization at 16,000 rpm with no centrifugation.

  18. 78 FR 27406 - Agency Information Collection Activities; Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-10

    ... groups will be conducted with up to eight participants in each for a total sample size of 32. The second... determine eligibility for the pilot study to recruit a sample of 500 participants (50 from each clinical... participate in an in-depth, qualitative telephone interview for a total of 100 interviews. Finally, up to...

  19. Illiteracy, Sex and Occupational Status in Present-Day China.

    ERIC Educational Resources Information Center

    Lamontagne, Jacques

    This study determined the magnitude of disparity between men and women in China in relation to illiteracy and occupational status. Region and ethnicity are used as control variables. The data collected are from a 10 percent sampling of the 1982 census; the total sample size includes a population of 100,380,000 nationwide. The census questionnaire…

  20. Model Choice and Sample Size in Item Response Theory Analysis of Aphasia Tests

    ERIC Educational Resources Information Center

    Hula, William D.; Fergadiotis, Gerasimos; Martin, Nadine

    2012-01-01

    Purpose: The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Method: Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from…

  1. Imaging natural materials with a quasi-microscope. [spectrophotometry of granular materials]

    NASA Technical Reports Server (NTRS)

    Bragg, S.; Arvidson, R.

    1977-01-01

    A Viking lander camera with auxiliary optics mounted inside the dust post was evaluated to determine its capability for imaging the inorganic properties of granular materials. During mission operations, prepared samples would be delivered to a plate positioned within the camera's field of view and depth of focus. The auxiliary optics would then allow soil samples to be imaged with an 11 μm pixel size in the broad band (high resolution, black and white) mode, and a 33 μm pixel size in the multispectral mode. The equipment will be used to characterize: (1) the size distribution of grains produced by igneous (intrusive and extrusive) processes or by shock metamorphism; (2) the size distribution resulting from crushing, chemical alteration, or hydraulic or aerodynamic sorting; (3) the shape and degree of grain roundness and surface texture induced by mechanical and chemical alteration; and (4) the mineralogy and chemistry of grains.

  2. Passive vs. Parachute System Architecture for Robotic Sample Return Vehicles

    NASA Technical Reports Server (NTRS)

    Maddock, Robert W.; Henning, Allen B.; Samareh, Jamshid A.

    2016-01-01

    The Multi-Mission Earth Entry Vehicle (MMEEV) is a flexible vehicle concept based on the Mars Sample Return (MSR) EEV design which can be used in the preliminary sample return mission study phase to parametrically investigate any trade space of interest to determine the best entry vehicle design approach for that particular mission concept. In addition to the trade space dimensions often considered (e.g. entry conditions, payload size and mass, vehicle size, etc.), the MMEEV trade space considers whether it might be more beneficial for the vehicle to utilize a parachute system during descent/landing or to be fully passive (i.e. not use a parachute). In order to evaluate this trade space dimension, a simplified parachute system model has been developed based on inputs such as vehicle size/mass, payload size/mass and landing requirements. This model works in conjunction with analytical approximations of a mission trade space dataset provided by the MMEEV System Analysis for Planetary EDL (M-SAPE) tool to help quantify the differences between an active (with parachute) and a passive (no parachute) vehicle concept.

  3. Surface-sediment grain-size distribution and sediment transport in the subaqueous Mekong Delta, Vietnam

    NASA Astrophysics Data System (ADS)

    Nguyen, T. T.; Stattegger, K.; Nittrouer, C.; Phung, P. V.; Liu, P.; DeMaster, D. J.; Bui, D. V.; Le, A. D.; Nguyen, T. N.

    2016-02-01

    Surface-sediment samples collected in coastal waters around the Mekong Delta (from the distributary channels to the Ca Mau Peninsula) were analyzed to determine the surface-sediment grain-size distribution and the sediment-transport trend in the subaqueous Mekong Delta. The grain-size data set of 238 samples was obtained using the laser instruments Mastersizer 2000 and LS Particle Size Analyzer. Fourteen samples were selected for geochemical analysis (total-organic and carbonate content). These geochemical results were used to assist in interpreting variations of granulometric parameters along the cross-shore transects. Nine transects were examined from the Cung Hau river mouth to the Ca Mau Peninsula and six thematic maps of the whole study area were made. The results indicate that: (1) generally, the sediment becomes finer from the delta front down to the prodelta and becomes coarser again and more poorly sorted on the adjacent inner shelf due to different sources of sediment; (2) sediment-granulometry parameters vary among the sedimentary sub-environments of the underwater part of the Mekong Delta, controlled by the distance from the sediment source and the hydrodynamic regime of each region; (3) the net sediment transport is southwestward toward the Ca Mau Peninsula.

  4. A simple method for the analysis of particle sizes of forage and total mixed rations.

    PubMed

    Lammers, B P; Buckmaster, D R; Heinrichs, A J

    1996-05-01

    A simple separator that allows easy separation of wet forage into three fractions and plotting of the particle size distribution was developed to determine the particle sizes of forage and TMR. The device was designed to mimic the laboratory-scale separator for forage particle sizes that was specified by Standard S424 of the American Society of Agricultural Engineers. A comparison of results using the standard device and the newly developed separator indicated no difference in the ability to predict fractions of particles with maximum length of less than 8 and 19 mm. The separator requires a small quantity of sample (1.4 L) and is manually operated. The materials on the screens and bottom pan were weighed to obtain the cumulative percentage of sample that was undersize for the two fractions. The results were then plotted using the Weibull distribution, which proved to be the best fit for the data. Convenience samples of haycrop silage, corn silage, and TMR from farms in the northeastern US were analyzed using the forage and TMR separator, and the range of observed values is given.
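
    A minimal sketch of fitting a two-parameter Weibull cumulative distribution to cumulative percent-undersize values from the two screens; the screen apertures match the separator (8 and 19 mm), while the undersize fractions are invented example values.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_cdf(x, scale, shape):
        """Cumulative fraction of particles shorter than x."""
        return 1.0 - np.exp(-(x / scale) ** shape)

    x_mm = np.array([8.0, 19.0])        # screen apertures
    undersize = np.array([0.45, 0.80])  # cumulative fraction passing each

    (scale, shape), _ = curve_fit(weibull_cdf, x_mm, undersize, p0=[10.0, 1.5])
    print(f"Weibull scale = {scale:.1f} mm, shape = {shape:.2f}")

    # Median particle length implied by the fitted distribution
    median = scale * np.log(2.0) ** (1.0 / shape)
    print(f"median particle size ~ {median:.1f} mm")
    ```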

  5. Glass frit nebulizer for atomic spectrometry

    USGS Publications Warehouse

    Layman, L.R.

    1982-01-01

    The nebulization of sample solutions is a critical step in most flame or plasma atomic spectrometric methods. A novel nebulization technique, based on a porous glass frit, has been investigated. Basic operating parameters and characteristics have been studied to determine how the new nebulizer may be applied to atomic spectrometric methods. The results of preliminary comparisons with pneumatic nebulizers indicate several notable differences. The frit nebulizer produces a smaller droplet size distribution and has a higher sample transport efficiency. The mean droplet size is approximately 0.1 μm, and up to 94% of the sample is converted to usable aerosol. The most significant limitations in the performance of the frit nebulizer are the slow sample equilibration time and the requirement for wash cycles between samples. Loss of solute by surface adsorption and contamination of samples by leaching from the glass were both found to be limitations only in unusual cases. This nebulizer shows great promise where sample volume is limited or where measurements require long nebulization times.

  6. Researchers’ Intuitions About Power in Psychological Research

    PubMed Central

    Bakker, Marjan; Hartgerink, Chris H. J.; Wicherts, Jelte M.; van der Maas, Han L. J.

    2016-01-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers’ experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. PMID:27354203
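
    The mismatch reported above is easy to reproduce with a standard power routine, here using statsmodels, taking Cohen's d = 0.2 as the "small" effect; the cell size of 24 per group is a purely illustrative assumption.

    ```python
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Per-group n needed for .80 power, two-sided alpha = .05, d = 0.2
    n_per_group = analysis.solve_power(effect_size=0.2, power=0.80, alpha=0.05)
    print(f"n per group for d = 0.2, power = .80: {n_per_group:.0f}")  # ~394

    # Power actually achieved by a small cell size (24 per group, assumed)
    power = analysis.power(effect_size=0.2, nobs1=24, alpha=0.05)
    print(f"power with n = 24 per group: {power:.2f}")                 # ~0.10
    ```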

  7. Temperature dependence of the size distribution function of InAs quantum dots on GaAs(001)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arciprete, F.; Fanfoni, M.; Patella, F.

    2010-04-15

    We present a detailed atomic-force-microscopy study of the effect of annealing on InAs/GaAs(001) quantum dots grown by molecular-beam epitaxy. Samples were grown at a low growth rate at 500 deg. C with an InAs coverage slightly greater than the critical thickness and subsequently annealed at several temperatures. We find that immediately quenched samples exhibit a bimodal size distribution with a high density of small dots (<50 nm^3) while annealing at temperatures greater than 420 deg. C leads to a unimodal size distribution. This result indicates a coarsening process governing the evolution of the island size distribution function which is limited by the attachment-detachment of the adatoms at the island boundary. At higher temperatures one cannot ascribe a single rate-determining step for coarsening because of the increased role of adatom diffusion. However, for long annealing times at 500 deg. C the island size distribution is strongly affected by In desorption.

  8. Researchers' Intuitions About Power in Psychological Research.

    PubMed

    Bakker, Marjan; Hartgerink, Chris H J; Wicherts, Jelte M; van der Maas, Han L J

    2016-08-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers' experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. © The Author(s) 2016.

  9. Speckle size in optical Fourier domain imaging

    NASA Astrophysics Data System (ADS)

    Lamouche, G.; Vergnole, S.; Bisaillon, C.-E.; Dufour, M.; Maciejko, R.; Monchalin, J.-P.

    2007-06-01

    As in conventional time-domain optical coherence tomography (OCT), speckle is inherent to any Optical Fourier Domain Imaging (OFDI) of biological tissue. OFDI is also known as swept-source OCT (SS-OCT). The axial speckle size is mainly determined by the OCT resolution length and the transverse speckle size by the focusing optics illuminating the sample. There is also a contribution from the sample related to the number of scatterers contained within the probed volume. In the OFDI data processing, there is some liberty in selecting the range of wavelengths used and this allows variation in the OCT resolution length. Consequently the probed volume can be varied. By performing measurements on an optical phantom with a controlled density of discrete scatterers and by changing the probed volume with different range of wavelengths in the OFDI data processing, there is an obvious change in the axial speckle size, but we show that there is also a less obvious variation in the transverse speckle size. This work contributes to a better understanding of speckle in OCT.
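
    A back-of-envelope companion to the two speckle dimensions discussed above: the axial size from the coherence (resolution) length, the transverse size from the focused spot. The source and optics parameters are typical swept-source values chosen for illustration, not those of the paper.

    ```python
    import numpy as np

    lambda0 = 1310e-9  # centre wavelength (m)
    d_lambda = 100e-9  # processed bandwidth (m); narrowing it in the OFDI
                       # processing lengthens the axial speckle

    # Axial resolution for a Gaussian spectrum: (2 ln2 / pi) lambda^2 / dl
    axial = (2 * np.log(2) / np.pi) * lambda0**2 / d_lambda

    # Transverse spot of the illuminating optics (Gaussian beam)
    f, D = 25e-3, 5e-3  # focal length, beam diameter on the lens
    transverse = 4 * lambda0 * f / (np.pi * D)

    print(f"axial ~ {axial * 1e6:.1f} um, transverse ~ {transverse * 1e6:.1f} um")
    # Using only half the bandwidth doubles the axial speckle size:
    print(f"half bandwidth -> axial ~ {2 * axial * 1e6:.1f} um")
    ```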

  10. The Risk of Adverse Impact in Selections Based on a Test with Known Effect Size

    ERIC Educational Resources Information Center

    De Corte, Wilfried; Lievens, Filip

    2005-01-01

    The authors derive the exact sampling distribution function of the adverse impact (AI) ratio for single-stage, top-down selections using tests with known effect sizes. Subsequently, it is shown how this distribution function can be used to determine the risk that a future selection decision on the basis of such tests will result in an outcome that…
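
    A Monte Carlo sketch of the same quantity (the paper's exact distribution is not reproduced here): simulate top-down selection with a known effect size and tabulate the adverse impact ratio. Group sizes, d = 0.5, and the 20% selection ratio are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    n_major, n_minor, d = 120, 60, 0.5
    n_hired = int(0.20 * (n_major + n_minor))
    n_sims = 20_000

    ai = np.empty(n_sims)
    for i in range(n_sims):
        scores_major = rng.normal(d, 1.0, n_major)  # majority scores d SD higher
        scores_minor = rng.normal(0.0, 1.0, n_minor)
        scores = np.concatenate([scores_major, scores_minor])
        cutoff = np.sort(scores)[-n_hired]          # top-down selection
        rate_major = np.mean(scores_major >= cutoff)
        rate_minor = np.mean(scores_minor >= cutoff)
        ai[i] = rate_minor / rate_major

    # Risk of adverse impact under the common four-fifths rule
    print(f"median AI ratio: {np.median(ai):.2f}")
    print(f"P(AI ratio < 0.8): {np.mean(ai < 0.8):.2f}")
    ```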

  11. Self-assembled indium arsenide quantum dots: Structure, formation dynamics, optical properties

    NASA Astrophysics Data System (ADS)

    Lee, Hao

    1998-12-01

    In this dissertation, we investigate the properties of InAs/GaAs quantum dots grown by molecular beam epitaxy. The structure and formation dynamics of InAs quantum dots are studied by a variety of structural characterization techniques. Correlations among the growth conditions, the structural characteristics, and the observed optical properties are explored. The most fundamental structural characteristic of the InAs quantum dots is their shape. Through detailed study of the reflection high energy electron diffraction patterns, we determined that self-assembled InAs islands possess a pyramidal shape with {136} bounding facets. Cross-sectional transmission electron microscopy images and atomic force microscopy images strongly support this model. The {136} model we proposed is the first model that is consistent with all reported shape features determined using different methods. The dynamics of coherent island formation is also studied with the goal of establishing the factors most important in determining the size, density, and the shape of self-organized InAs quantum dots. Our studies clearly demonstrate the roles that indium diffusion and desorption play in InAs island formation. An unexpected finding (from atomic force microscopy images) was that the island size distribution bifurcated during post-growth annealing. Photoluminescence spectra of the samples subjected to in-situ annealing prior to the growth of a capping layer show a distinctive double-peak feature. The power-dependence and temperature-dependence of the photoluminescence spectra reveal that the double-peak emission is associated with the ground-state transition of islands in two different size branches. These results confirm the island size bifurcation observed from atomic force microscopy images. The island size bifurcation provides a new approach to the control and manipulation of the island size distribution. Unexpected dependence of the photoluminescence line-shape on sample temperature and pump intensity was observed for samples grown at relatively high substrate temperatures. The behavior is modeled and explained in terms of competition between two overlapping transitions. The study underscores that the growth conditions can have a dramatic impact on the optical properties of the quantum dots. This dissertation includes both my previously published and unpublished authored materials.

  12. A Naturalistic Study of Driving Behavior in Older Adults and Preclinical Alzheimer Disease.

    PubMed

    Babulal, Ganesh M; Stout, Sarah H; Benzinger, Tammie L S; Ott, Brian R; Carr, David B; Webb, Mollie; Traub, Cindy M; Addison, Aaron; Morris, John C; Warren, David K; Roe, Catherine M

    2017-01-01

    A clinical consequence of symptomatic Alzheimer's disease (AD) is impaired driving performance. However, decline in driving performance may begin in the preclinical stage of AD. We used a naturalistic driving methodology to examine differences in driving behavior over one year in a small sample of cognitively normal older adults with (n = 10) and without (n = 10) preclinical AD. As expected with a small sample size, there were no statistically significant differences between the two groups, but older adults with preclinical AD drove less often, were less likely to drive at night, and had fewer aggressive behaviors such as hard braking, speeding, and sudden acceleration. The sample size required to power a larger study to determine differences was calculated.

  13. Around Marshall

    NASA Image and Video Library

    1996-06-10

    The dart and associated launching system was developed by engineers at MSFC to collect a sample of the aluminum oxide particles during the static fire testing of the Shuttle's solid rocket motor. The dart is launched through the exhaust and recovered post-test. The particles are collected on sticky copper tapes affixed to a cylindrical shaft in the dart. A protective sleeve draws over the tape after the sample is collected to prevent contamination. The sample is analyzed under a scanning electron microscope under high magnification and a particle size distribution is determined. This size distribution is input into the analytical model to predict the radiative heating rates from the motor exhaust. Good prediction models are essential to optimizing the development of the thermal protection system for the Shuttle.

  14. Aeromechanics and Vehicle Configuration Demonstrations. Volume 2: Understanding Vehicle Sizing, Aeromechanics and Configuration Trades, Risks, and Issues for Next-Generations Access to Space Vehicles

    DTIC Science & Technology

    2014-01-01

    and proportional correctors. The weighting function evaluates nearby data samples to determine the utility of each correction style, eliminating the...sparse methods may be of use. As for other multi-fidelity techniques, true cokriging in the style described by geo-statisticians[93] is beyond the...sampling style between sampling points predicted to fall near the contour and sampling points predicted to be farther from the contour but with

  15. Measuring Shock Stage of Itokawa Regolith Grains by Electron Back-Scattered Diffraction and Synchrotron X-Ray Diffraction

    NASA Technical Reports Server (NTRS)

    Zolensky, Michael; Mikouchi, Takashi; Hagiya, Kenji; Ohsumi, Kazumasa; Martinez, James; Sitzman, Scott; Terada, Yasuko; Yagi, Naoto; Komatsu, Mutsumi; et al.

    2017-01-01

    We have been analyzing Itokawa samples in order to definitively establish the degree of shock experienced by the regolith of asteroid Itokawa, and to devise a bridge between shock determinations by standard light optical petrography and crystal structures as determined by electron and X-ray diffraction techniques. We are making measurements of olivine crystal structures and using these to elucidate critical regolith impact processes. We use electron back-scattered diffraction (EBSD) and synchrotron X-ray diffraction (SXRD). We are comparing the Itokawa samples to L and LL chondrite meteorites chosen to span the shock scale experienced by Itokawa, specifically Chainpur (LL3.4, shock stage S1), Semarkona (LL3.00, S2), Kilabo (LL6, S3), NWA100 (L6, S4) and Chelyabinsk (LL5, S4). In SXRD we measure the line broadening of olivine reflections as a measure of shock stage. In this presentation we concentrate on the EBSD work. We employed JSC's Supra 55 variable pressure FEG-SEM and Bruker EBSD system. We are not seeking actual strain values, but rather indirect strain-related measurements such as the extent of intra-grain lattice rotation, and determining whether shock state "standards" (meteorite samples of accepted shock state, and appropriate small grain size) show strain measurements that may be statistically differentiated, using a sampling of particles (number and size range) typical of asteroid regoliths. Using our system we determined that a column pressure of 9 Pa and no C-coating on the sample was optimal. We varied camera exposure time and gain to optimize mapping performance, concluding that 320x240 pattern pixilation, frame averaging of 3, 15 kV, and low extractor voltage yielded an acceptable balance of hit rate (>90%), speed (11 fps) and map quality using an exposure time of 30 ms (gain 650). We found that there was no strong effect of step size on Grain Orientation Spread (GOS) and Grain Reference Orientation Deviation angle (GROD-a) distribution; there was some effect on grain average Kernel Average Misorientation (KAM) (reduced with smaller step size for the same grain), as expected. We monitored GOS, Maximum Orientation Spread (MOS) and GROD-a differences between whole olivine grains and sub-sampled areas, and found that there were significant differences between the whole grain dataset and subsets, as well as between subsets, likely due to sampling-related "noise". Also, in general (and logically), whole grains exhibit greater degrees of cumulative lattice rotation. Sampling size affects the apparent strain character of the grain, at least as measured by GOS, MOS and GROD-a. There were differences in the distribution frequencies of GOS and MOS between shock stages, and in plots of MOS and GOS vs. grain diameter. These results are generally consistent with those reported this year. However, it is unknown whether the differences between samples of different shock states exceed the clustering of these values to the extent that shock stage determinations can still be made with confidence. We are investigating this by examining meteorites with higher shock stages 4 to 5. Our research will improve our understanding of how small, primitive solar system bodies formed and evolved, and improve understanding of the processes that determine the history and future of habitability of environments on other solar system bodies. The results will directly enrich the ongoing asteroid and comet exploration missions by NASA and JAXA, broaden our understanding of the origin and evolution of small bodies in the early solar system, and elucidate the nature of asteroid and comet regolith.

  16. Quantitative characterisation of sedimentary grains

    NASA Astrophysics Data System (ADS)

    Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.

    2016-04-01

    Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to cases of high definition thin section microphotographs showing clear contrast between framework grains and matrix. Along with the size of a grain, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed and produced high quality results supporting the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments in the Dingle Basin, County Kerry, Ireland were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented and allows for quantitative distinction between samples extracted from different sedimentary environments.
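
    Two of the shape parameters named above have simple closed forms once a grain boundary is available as polygon vertices. A minimal sketch follows; the hexagon stands in for a digitized grain outline, and this is not the authors' Mathematica package.

    ```python
    import numpy as np

    def area_perimeter(x, y):
        """Shoelace area and perimeter of a closed polygon."""
        x2, y2 = np.roll(x, -1), np.roll(y, -1)
        area = 0.5 * abs(np.sum(x * y2 - x2 * y))
        perimeter = np.sum(np.hypot(x2 - x, y2 - y))
        return area, perimeter

    def circularity(x, y):
        """4*pi*A / P^2: 1 for a circle, smaller for irregular grains."""
        area, perim = area_perimeter(x, y)
        return 4.0 * np.pi * area / perim**2

    def equivalent_diameter(x, y):
        """Diameter of the circle with the same area as the grain."""
        area, _ = area_perimeter(x, y)
        return 2.0 * np.sqrt(area / np.pi)

    theta = np.linspace(0, 2 * np.pi, 7)[:-1]  # regular hexagon
    x, y = 10 * np.cos(theta), 10 * np.sin(theta)
    print(f"circularity: {circularity(x, y):.3f}")  # ~0.907 for a hexagon
    print(f"equivalent diameter: {equivalent_diameter(x, y):.1f}")
    ```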

  17. Emission factors for PM2.5, CO, CO2, NOx, SO2 and particle size distributions from the combustion of wood species using a new controlled combustion chamber 3CE.

    PubMed

    Cereceda-Balic, Francisco; Toledo, Mario; Vidal, Victor; Guerrero, Fabian; Diaz-Robles, Luis A; Petit-Breuilh, Ximena; Lapuerta, Magin

    2017-04-15

    The objective of this research was to determine emission factors (EF) for particulate matter (PM2.5), combustion gases and the particle size distribution generated by the combustion of Eucalyptus globulus (EG) and Nothofagus obliqua (NO), both hardwoods, and Pinus radiata (PR), a softwood, using a controlled combustion chamber (3CE). Additionally, the contribution of the different emission stages associated with the combustion of these wood samples was also determined. Combustion experiments were performed using shaving-size dried wood (0% humidity). The emission samples were collected with a Tedlar bag and sampling cartridges containing quartz fiber filters. High reproducibility was achieved between experiment repetitions (CV<10%, n=3). The EF for PM2.5 was 1.06 g kg-1 for EG, 1.33 g kg-1 for NO, and 0.84 g kg-1 for PR. Using a laser aerosol spectrometer (0.25-34 μm), the contribution of particle emissions (PM2.5) in each stage of the emission process (SEP) was sampled in real time. Particles of 0.265 μm were predominant during all stages, and the percentages emitted were PR (33%), EG (29%), and NO (21%). The distributions of EF for PM2.5 in the pre-ignition, flame and smoldering stages varied from predominance of the flame stage for PR (77%) to predominance of the smoldering stage for NO (60%). These results prove that the flame phase is not the only stage contributing to emissions; on the contrary, pre-ignition and especially post-combustion smoldering also make very significant contributions. This demonstrates that particle concentrations measured only in the stationary state during the flame stage may cause underestimation of emissions. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. An In Situ Method for Sizing Insoluble Residues in Precipitation and Other Aqueous Samples

    PubMed Central

    Axson, Jessica L.; Creamean, Jessie M.; Bondy, Amy L.; Capracotta, Sonja S.; Warner, Katy Y.; Ault, Andrew P.

    2015-01-01

    Particles are frequently incorporated into clouds or precipitation, influencing climate by acting as cloud condensation or ice nuclei, taking up coatings during cloud processing, and removing species through wet deposition. Many of these particles, particularly ice nuclei, can remain suspended within cloud droplets/crystals as insoluble residues. While previous studies have measured the soluble or bulk mass of species within clouds and precipitation, no studies to date have determined the number concentration and size distribution of insoluble residues in precipitation or cloud water using in situ methods. Herein, for the first time we demonstrate that Nanoparticle Tracking Analysis (NTA) is a powerful in situ method for determining the total number concentration, number size distribution, and surface area distribution of insoluble residues in precipitation, both of rain and melted snow. The method uses 500 μL or less of liquid sample and does not require sample modification. Number concentrations for the insoluble residues in aqueous precipitation samples ranged from 2.0-3.0(±0.3)×10^8 particles cm^-3, while surface area ranged from 1.8(±0.7)-3.2(±1.0)×10^7 μm^2 cm^-3. Number size distributions peaked between 133-150 nm, with both single and multi-modal character, while surface area distributions peaked between 173-270 nm. Comparison with electron microscopy of particles up to 10 μm shows that, by number, >97% of residues are <1 μm in diameter, the upper limit of the NTA. The range of concentration and distribution properties indicates that insoluble residue properties vary with ambient aerosol concentrations, cloud microphysics, and meteorological dynamics. NTA has great potential for studying the role that insoluble residues play in critical atmospheric processes. PMID:25705069

  19. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    PubMed

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long term follow up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing prognostic value of a series of cut off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
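
    The contrast between the two schemes can be mimicked in a few lines. A sketch with synthetic nuclear areas, where "at convenience" selection is modelled as a bias toward medium-sized nuclei and SRS takes every k-th nucleus after a random start; the bias model is an assumption, not the study's measurement protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    areas = rng.lognormal(mean=4.0, sigma=0.5, size=2000)  # nuclear areas

    def systematic_random_sample(values, n):
        k = len(values) // n
        start = rng.integers(0, k)
        return values[start::k][:n]

    def convenience_sample(values, n):
        # mimic unconsciously skipping the smallest and largest nuclei
        w = np.exp(-((values - np.median(values)) / values.std()) ** 2)
        return rng.choice(values, size=n, replace=False, p=w / w.sum())

    srs = systematic_random_sample(areas, 50)
    acs = convenience_sample(areas, 50)
    print(f"population SDNA: {areas.std():.1f}")
    print(f"SRS-50 SDNA:     {srs.std():.1f}")  # close to the population SD
    print(f"ACS-50 SDNA:     {acs.std():.1f}")  # systematically lower
    ```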

  20. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  1. Quantitative flaw characterization with scanning laser acoustic microscopy

    NASA Technical Reports Server (NTRS)

    Generazio, E. R.; Roth, D. J.

    1986-01-01

    Surface roughness and diffraction are two factors that have been observed to affect the accuracy of flaw characterization with scanning laser acoustic microscopy. Inaccuracies can arise when the surface of the test sample is acoustically rough. It is shown that, in this case, Snell's law is no longer valid for determining the direction of sound propagation within the sample. The relationship between the direction of sound propagation within the sample, the apparent flaw depth, and the sample's surface roughness is investigated. Diffraction effects can mask the acoustic images of minute flaws and make it difficult to establish their size, depth, and other characteristics. It is shown that for Fraunhofer diffraction conditions the acoustic image of a subsurface defect corresponds to a two-dimensional Fourier transform. Transforms based on simulated flaws are used to infer the size and shape of the actual flaw.
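
    The Fraunhofer statement in this record has a compact numerical illustration: the far-field pattern of a simulated circular flaw is given by its two-dimensional Fourier transform, and the first dark ring encodes the flaw radius. A sketch follows; the grid size, flaw radius and the crude minimum detection are assumptions.

    ```python
    import numpy as np

    n, flaw_radius = 256, 8  # grid points, flaw radius in pixels
    yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    flaw = (xx**2 + yy**2 <= flaw_radius**2).astype(float)

    # Far-field (Fraunhofer) pattern ~ |2D Fourier transform|^2
    pattern = np.abs(np.fft.fftshift(np.fft.fft2(flaw)))**2

    # For a circular flaw the first dark (Airy) ring sits at roughly
    # 0.61 * n / flaw_radius pixels from the centre; find it crudely as
    # the first point where the radial profile stops decreasing.
    profile = pattern[n // 2, n // 2:]
    first_min = int(np.argmax(np.diff(profile) > 0))
    print(f"first minimum at {first_min} px; "
          f"predicted {0.61 * n / flaw_radius:.1f} px")
    ```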

  2. Forest inventory using multistage sampling with probability proportional to size. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.

    1984-01-01

    A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in southeastern Brazil. The LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified utilizing a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third stage (ground level) the timber volume of each class is determined. The total timber volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate timber volume estimate with a smaller number of aerial photographs and reduced time in field work.
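
    A single-stage sketch of the probability-proportional-to-size idea underlying this design (the real procedure spans three stages): units are drawn with probability proportional to their classified area, and the Hansen-Hurwitz estimator expands the sampled volumes to a total. All numbers are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    area = rng.uniform(10, 200, size=100)          # unit sizes (ha)
    volume = area * rng.normal(150, 20, size=100)  # true timber volume per unit

    p = area / area.sum()                          # PPS selection probabilities
    n = 10
    chosen = rng.choice(100, size=n, replace=True, p=p)

    # Hansen-Hurwitz: the mean of y_i / p_i over the draws estimates the total
    estimate = np.mean(volume[chosen] / p[chosen])
    print(f"estimated total volume: {estimate:,.0f}")
    print(f"true total volume:      {volume.sum():,.0f}")
    ```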

  3. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N0 approaches infinity (regardless of the relative sizes of N0 and Ni, i = 1,...,m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
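
    For concreteness, a generic successive-approximation (EM-style) iteration for a two-component normal mixture with a partially identified sample: N0 unlabeled observations plus labeled observations from each component. This is the standard fixed-point construction with known unit variances, shown only as an illustration, not the paper's steepest-ascent variant.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(11)

    x0 = np.concatenate([rng.normal(0.0, 1, 300),
                         rng.normal(3.0, 1, 200)])  # unlabeled, N0 = 500
    x1 = rng.normal(0.0, 1, 50)                     # labeled, component 1
    x2 = rng.normal(3.0, 1, 50)                     # labeled, component 2

    w, mu = 0.5, np.array([-1.0, 1.0])              # initial guesses
    for _ in range(100):
        # E-step: posterior that each unlabeled point is from component 1
        d1 = w * norm.pdf(x0, mu[0], 1)
        d2 = (1 - w) * norm.pdf(x0, mu[1], 1)
        r = d1 / (d1 + d2)
        # M-step: labeled samples enter with known membership
        mu[0] = (r @ x0 + x1.sum()) / (r.sum() + x1.size)
        mu[1] = ((1 - r) @ x0 + x2.sum()) / ((1 - r).sum() + x2.size)
        w = r.mean()  # mixing proportion, from the unlabeled sample only

    print(f"estimated means: {mu.round(2)}, mixing proportion: {w:.2f}")
    ```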

  4. Rapid and non-invasive analysis of deoxynivalenol in durum and common wheat by Fourier-Transform Near Infrared (FT-NIR) spectroscopy.

    PubMed

    De Girolamo, A; Lippolis, V; Nordkvist, E; Visconti, A

    2009-06-01

    Fourier transform near-infrared spectroscopy (FT-NIR) was used for rapid and non-invasive analysis of deoxynivalenol (DON) in durum and common wheat. The relevance of using ground wheat samples with a homogeneous particle size distribution to minimize measurement variations and avoid DON segregation among particles of different sizes was established. Calibration models for durum wheat, common wheat and durum + common wheat samples, with particle size <500 microm, were obtained by using partial least squares (PLS) regression with an external validation technique. Values of root mean square error of prediction (RMSEP, 306-379 microg kg(-1)) were comparable and not too far from values of root mean square error of cross-validation (RMSECV, 470-555 microg kg(-1)). Coefficients of determination (r(2)) indicated an "approximate to good" level of prediction of the DON content by FT-NIR spectroscopy in the PLS calibration models (r(2) = 0.71-0.83), and a "good" discrimination between low and high DON contents in the PLS validation models (r(2) = 0.58-0.63). A "limited to good" practical utility of the models was ascertained by range error ratio (RER) values higher than 6. A qualitative model, based on 197 calibration samples, was developed to discriminate between blank and naturally contaminated wheat samples by setting a cut-off at 300 microg kg(-1) DON to separate the two classes. The model correctly classified 69% of the 65 validation samples with most misclassified samples (16 of 20) showing DON contamination levels quite close to the cut-off level. These findings suggest that FT-NIR analysis is suitable for the determination of DON in unprocessed wheat at levels far below the maximum permitted limits set by the European Commission.
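
    The PLS-with-external-validation workflow in this record maps directly onto standard tooling. A sketch with scikit-learn; the spectra and DON values are simulated so the snippet runs stand-alone, and the component count is an arbitrary choice.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)

    # Stand-ins for FT-NIR spectra and reference DON values (ug/kg)
    n_samples, n_wavelengths = 197, 300
    spectra = rng.normal(size=(n_samples, n_wavelengths))
    don = 800 + 200 * spectra[:, :5].sum(axis=1) + rng.normal(0, 300, n_samples)

    X_cal, X_val, y_cal, y_val = train_test_split(
        spectra, don, test_size=65, random_state=0)

    pls = PLSRegression(n_components=5)
    pls.fit(X_cal, y_cal)

    y_pred = pls.predict(X_val).ravel()
    rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
    r2 = np.corrcoef(y_val, y_pred)[0, 1] ** 2
    print(f"RMSEP = {rmsep:.0f} ug/kg, r^2 = {r2:.2f}")
    ```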

  5. Probing defects in chemically synthesized ZnO nanostructures by positron annihilation and photoluminescence spectroscopy

    NASA Astrophysics Data System (ADS)

    Chaudhuri, S. K.; Ghosh, Manoranjan; Das, D.; Raychaudhuri, A. K.

    2010-09-01

    The present article describes the size-induced changes in the structural arrangement of intrinsic defects present in chemically synthesized ZnO nanoparticles of various sizes. Routine x-ray diffraction and transmission electron microscopy have been performed to determine the shapes and sizes of the nanocrystalline ZnO samples. Detailed studies using positron annihilation spectroscopy reveal the presence of zinc vacancies, whereas analysis of the photoluminescence results indicates the signature of charged oxygen vacancies. The size-induced changes in positron parameters, as well as the photoluminescence properties, show contrasting or nonmonotonic trends as size varies from 4 to 85 nm. Small spherical particles below a critical size (˜23 nm) acquire more positive surface charge due to the higher occupancy of the doubly charged oxygen vacancy as compared to the bigger nanostructures, where the singly charged oxygen vacancy predominates. This electronic alteration has been seen to trigger yet another interesting phenomenon, described as positron confinement inside nanoparticles. Finally, based on all the results, a model of the structural arrangement of the intrinsic defects in the present samples is proposed.

  6. The grain size(s) of Black Hills Quartzite deformed in the dislocation creep regime

    NASA Astrophysics Data System (ADS)

    Heilbronner, Renée; Kilian, Rüdiger

    2017-10-01

    General shear experiments on Black Hills Quartzite (BHQ) deformed in the dislocation creep regimes 1 to 3 have been previously analyzed using the CIP method (Heilbronner and Tullis, 2002, 2006). They are reexamined using the higher spatial and orientational resolution of EBSD. Criteria for coherent segmentations based on c-axis orientation and on full crystallographic orientations are determined. Texture domains of preferred c-axis orientation (Y and B domains) are extracted and analyzed separately. Subdomains are recognized, and their shape and size are related to the kinematic framework and the original grains in the BHQ. Grain size analysis is carried out for all samples, high- and low-strain samples, and separately for a number of texture domains. When comparing the results to the recrystallized quartz piezometer of Stipp and Tullis (2003), it is found that grain sizes are consistently larger for a given flow stress. It is therefore suggested that the recrystallized grain size also depends on texture, grain-scale deformation intensity, and the kinematic framework (of axial vs. general shear experiments).

  7. The characterization of four gene expression analysis in circulating tumor cells made by Multiplex-PCR from the AdnaTest kit on the lab-on-a-chip Agilent DNA 1000 platform.

    PubMed

    Škereňová, Markéta; Mikulová, Veronika; Čapoun, Otakar; Zima, Tomáš

    2016-01-01

    Nowadays, on-a-chip capillary electrophoresis is a routine method for the detection of PCR fragments. The Agilent 2100 Bioanalyzer was one of the first commercial devices in this field. Our project was designed to study the characteristics of the Agilent DNA 1000 kit in PCR fragment analysis as a part of a circulating tumour cell (CTC) detection technique. Despite the common use of this kit, a complex analysis of the results from a long-term project is still missing. A commercially available Agilent DNA 1000 kit was used as a final step in the CTC detection (AdnaTest) for the determination of the presence of PCR fragments generated by Multiplex PCR. Data from 30 prostate cancer patients obtained during two years of research were analyzed to determine the trueness and precision of the PCR fragment size determination. Additional experiments were performed to demonstrate the precision (repeatability, reproducibility) and robustness of the PCR fragment concentration determination. The trueness and precision of the size determination were below 3% and 2%, respectively. The repeatability of the concentration determination was below 15%. The difference in concentration determination increases when a Multiplex-PCR/storage step is added between the two measurements of one sample. The characteristics established in our study are in concordance with the manufacturer's specifications established for a ladder as a sample. However, the concentration determination may vary depending on chip preparation, sample storage and concentration. The 15% variation in concentration determination repeatability was shown to be partly proportional and can be suppressed by proper normalization.

  8. Determination of formaldehyde by HPLC as the DNPH derivative following high-volume air sampling onto bisulfite-coated cellulose filters

    NASA Astrophysics Data System (ADS)

    de Andrade, Jailson B.; Tanner, Roger L.

    A method is described for the specific collection of formaldehyde as hydroxymethanesulfonate on bisulfite-coated cellulose filters. Following extraction in aqueous acid and removal of unreacted bisulfite, the hydroxymethanesulfonate is decomposed by base, and HCHO is determined by DNPH (2,4-dinitrophenylhydrazine) derivatization and HPLC. Since the collection efficiency for formaldehyde is moderately high even when sampling ambient air at high-volume flow rates, a limit of detection of 0.2 ppbv is achieved with 30 min sampling times. Interference from acetaldehyde co-collected as 1-hydroxyethanesulfonate is <5% using this procedure. The technique shows promise both for short-term airborne sampling and as a means of collecting mg-sized samples of HCHO on an inorganic matrix for carbon isotopic analyses.

  9. Spatial scale and sampling resolution affect measures of gap disturbance in a lowland tropical forest: implications for understanding forest regeneration and carbon storage.

    PubMed

    Lobo, Elena; Dalling, James W

    2014-03-07

    Treefall gaps play an important role in tropical forest dynamics and in determining above-ground biomass (AGB). However, our understanding of gap disturbance regimes is largely based either on surveys of forest plots that are small relative to spatial variation in gap disturbance, or on satellite imagery, which cannot accurately detect small gaps. We used high-resolution light detection and ranging data from a 1500 ha forest in Panama to: (i) determine how gap disturbance parameters are influenced by study area size and by the criteria used to define gaps; and (ii) evaluate how accurately previous ground-based canopy height sampling can determine the size and location of gaps. We found that plot-scale disturbance parameters frequently differed significantly from those measured at the landscape level, and that the canopy height thresholds used to define gaps strongly influenced the gap-size distribution, an important metric influencing AGB. Furthermore, simulated ground surveys of canopy height frequently misrepresented the true location of gaps, which may affect conclusions about how relatively small canopy gaps affect successional processes and contribute to the maintenance of diversity. Across-site comparisons need to consider how gap definition, scale and spatial resolution affect characterizations of gap disturbance, and its inferred importance for carbon storage and community composition.

  10. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures of merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
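    The Nyquist-derived reference values mentioned above follow from elementary geometry: sample at no more than half the resolvable distance, both axially and along the arc swept at the edge of the field of view. A rough Python sketch under those simplified assumptions (the numbers are placeholders; the paper's own derivation uses the estimated SPECT system resolution and full pinhole geometry):

      import math

      def nyquist_helical_steps(resolution_mm, fov_radius_mm):
          """Reference step sizes: half the resolvable distance axially, and
          the angle whose arc at the FOV edge equals half the resolution."""
          axial_step = resolution_mm / 2.0
          angular_step = math.degrees((resolution_mm / 2.0) / fov_radius_mm)
          return axial_step, angular_step

      axial, angular = nyquist_helical_steps(resolution_mm=2.0, fov_radius_mm=30.0)
      # The study found optimal RMS resolution at roughly half the Nyquist
      # axial step and about twice the Nyquist angular step:
      print(axial / 2.0, angular * 2.0)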

  11. Variation in aluminum, iron, and particle concentrations in oxic ground-water samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    USGS Publications Warehouse

    Szabo, Z.; Oden, J.H.; Gibs, J.; Rice, D.E.; Ding, Y.; ,

    2001-01-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering. Variations in concentrations of aluminum and iron (1-74 and 1-199 ug/L (micrograms per liter), respectively), common indicators of the presence of particulate-borne trace elements, were greatest in sample sets from individual wells with the greatest variations in turbidity and particle concentration. Differences in trace-element concentrations in sequentially collected unfiltered samples with variable turbidity were 5 to 10 times as great as those in concurrently collected samples that were passed through various filters. These results indicate that turbidity must be both reduced and stabilized, even when low-flow sample-collection techniques are used, in order to obtain water samples that do not contain considerable particulate artifacts. Currently (2001) available techniques need to be refined to ensure that the measured trace-element concentrations are representative of those that are mobile in the aquifer water.

  12. The effects of substrate size, surface area, and density on coat thickness of multi-particulate dosage forms.

    PubMed

    Heinicke, Grant; Matthews, Frank; Schwartz, Joseph B

    2005-01-01

    Drug-layering experiments were performed in a fluid bed fitted with a rotor granulator insert using diltiazem as a model drug. The drug was applied in various quantities to sugar spheres of different mesh sizes to give a series of drug-layered sugar spheres (cores) of different potency, size, and weight per particle. The drug presence lowered the bulk density of the cores in proportion to the quantity of added drug. Polymer coating of each core lot was performed in a fluid bed fitted with a Wurster insert. A series of polymer-coated cores (pellets) was removed from each coating experiment. The mean diameter of each core and each pellet sample was determined by image analysis. The rate of change of diameter on polymer addition was determined for each starting size of core and compared to calculated values. The core diameter was displaced from the line of best fit through the pellet diameter data. Cores of different potency with the same size distribution were made by layering increasing quantities of drug onto sugar spheres of decreasing mesh size. Equal quantities of polymer were applied to the same-sized core lots and coat thickness was measured. Weight/weight calculations predict equal coat thickness under these conditions, but measurable differences were found. Simple corrections to core charge weight in the Wurster insert were successfully used to manufacture pellets having the same coat thickness. The sensitivity of the image analysis technique in measuring particle size distributions (PSDs) was demonstrated by measuring a displacement in PSD after addition of 0.5% w/w talc to a pellet sample.
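    The weight/weight prediction referred to above follows from sphere geometry: for a given polymer-to-core mass ratio, the ideal film thickness is fixed by the core radius and the two densities. A Python sketch with invented densities (not values from the study); it also makes clear that the same w/w loading yields thicker coats on larger cores, so equal thickness is predicted only for same-sized cores:

      def coat_thickness_um(core_diameter_um, coat_to_core_wt_ratio,
                            core_density=1.3, coat_density=1.1):
          """Ideal coat thickness on a spherical core from a w/w polymer
          loading, assuming a dense, uniform film. Densities (g/cm3) are
          illustrative placeholders."""
          r = core_diameter_um / 2.0
          growth = (1.0 + coat_to_core_wt_ratio * core_density / coat_density) ** (1.0 / 3.0)
          return r * (growth - 1.0)

      # The same 10% w/w coating on two hypothetical core sizes:
      for d in (700.0, 1000.0):  # core diameters in micrometres
          print(d, round(coat_thickness_um(d, coat_to_core_wt_ratio=0.10), 1))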

  13. Colloid particle sizes in the Mississippi River and some of its tributaries, from Minneapolis to below New Orleans

    USGS Publications Warehouse

    Rostad, C.E.; Rees, T.F.; Daniel, S.R.

    1998-01-01

    An on-board technique was developed that combined discharge-weighted pumping through a high-speed continuous-flow centrifuge, for isolation of the particulate-sized material, with ultrafiltration for isolation of colloid-sized material. In order to address whether these processes changed the particle sizes during isolation, samples of particles in suspension were collected at various steps in the isolation process to evaluate changes in particle size. Particle sizes were determined using laser light-scattering photon correlation spectroscopy and indicated no change in size during the colloid isolation process. Mississippi River colloid particle sizes from twelve sites from Minneapolis to below New Orleans were compared with sizes from four tributaries and three seasons, and from predominantly autochthonous sources upstream to more allochthonous sources downstream. © 1998 John Wiley & Sons, Ltd.

  14. The effect of sample holder material on ion mobility spectrometry reproducibility

    NASA Technical Reports Server (NTRS)

    Jadamec, J. Richard; Su, Chih-Wu; Rigdon, Stephen; Norwood, Lavan

    1995-01-01

    When a positive detection of a narcotic occurs during the search of a vessel, a decision has to be made whether further intensive search is warranted. This decision is based in part on the results of a second sample collected from the same area. Therefore, the reproducibility of both sampling and instrumental analysis is critical in terms of justifying an in-depth search. As reported at the 2nd Annual IMS Conference in Quebec City, the U.S. Coast Guard has determined that when paper is utilized as the sample desorption medium for the Barringer IONSCAN, the analytical results using standard reference samples are reproducible. A study was conducted utilizing papers of varying pore sizes and comparing their performance as a desorption material relative to the standard Barringer 50 micron Teflon. Nominal pore sizes ranged from 30 microns down to 2 microns. Results indicate that there is some peak instability in the first two to three windows during the analysis. The severity of the instability was observed to increase as the pore size of the paper decreased. However, the observed peak instability does not create a situation that results in decreased reliability or reproducibility of the analytical result.

  15. Estimating individual glomerular volume in the human kidney: clinical perspectives.

    PubMed

    Puelles, Victor G; Zimanyi, Monika A; Samuel, Terence; Hughson, Michael D; Douglas-Denton, Rebecca N; Bertram, John F; Armitage, James A

    2012-05-01

    Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin's concordance coefficient (Rc), coefficient of variation (CV) and coefficient of error (CE) measured reliability. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (Rc > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution.
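    The stability criterion described above (how many sampled glomeruli make the subject mean reliable) can be probed with a simple resampling experiment. A minimal Python sketch using synthetic, invented IGV values rather than the study's data:

      import numpy as np

      rng = np.random.default_rng(1)
      # Hypothetical IGV estimates for one subject (arbitrary units)
      igv = rng.lognormal(mean=1.0, sigma=0.35, size=30)

      def cv_of_mean(values, n_sampled, n_sim=10000):
          """Monte-Carlo estimate of how variable the mean IGV is when only
          n_sampled glomeruli are measured per subject."""
          means = np.array([rng.choice(values, n_sampled, replace=False).mean()
                            for _ in range(n_sim)])
          return means.std() / means.mean()  # coefficient of variation

      for n in (3, 6, 9, 15):
          print(n, round(100 * cv_of_mean(igv, n), 1), "% CV of the mean")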

  16. Total selenium in irrigation drain inflows to the Salton Sea, California, April 2009

    USGS Publications Warehouse

    May, Thomas W.; Walther, Michael J.; Saiki, Michael K.; Brumbaugh, William G.

    2009-01-01

    This report presents the results for the final sampling period (April 2009) of a 4-year monitoring program to characterize selenium concentrations in selected irrigation drains flowing into the Salton Sea, California. Total selenium and total suspended solids were determined in water samples. Total selenium, percent total organic carbon, and particle size were determined in sediments. Mean total selenium concentrations in water ranged from 0.98 to 22.9 micrograms per liter. Total selenium concentrations in sediment ranged from 0.078 to 5.0 micrograms per gram dry weight.

  17. A USANS/SANS study of the accessibility of pores in the Barnett Shale to methane and water

    USGS Publications Warehouse

    Ruppert, Leslie F.; Sakurovs, Richard; Blach, Tomasz P.; He, Lilin; Melnichenko, Yuri B.; Mildner, David F.; Alcantar-Lopez, Leo

    2013-01-01

    Shale is an increasingly important source of natural gas in the United States. The gas is held in fine pores that need to be accessed by horizontal drilling and hydrofracturing techniques. Understanding the nature of the pores may provide clues to making gas extraction more efficient. We have investigated two Mississippian Barnett Shale samples, combining small-angle neutron scattering (SANS) and ultrasmall-angle neutron scattering (USANS) to determine the pore size distribution of the shale over the size range 10 nm to 10 μm. By adding deuterated methane (CD4) and, separately, deuterated water (D2O) to the shale, we have identified the fraction of pores that are accessible to these compounds over this size range. The total pore size distribution is essentially identical for the two samples. At pore sizes >250 nm, >85% of the pores in both samples are accessible to both CD4 and D2O. However, differences in accessibility to CD4 are observed at the smaller pore sizes (~25 nm). In one sample, CD4 penetrated the smallest pores as effectively as it did the larger ones. In the other sample, less than 70% of the smallest pores were accessible to CD4, but they were still largely penetrable by water, suggesting that small-scale heterogeneities in methane accessibility occur in the shale samples even though the total porosity does not differ. An additional study investigating the dependence of scattered intensity on CD4 pressure allows for an accurate estimation of the pressure at which the scattered intensity is at a minimum. This study provides information about the composition of the material immediately surrounding the pores. Most of the accessible (open) pores in the 25 nm size range can be associated with either mineral matter or high-reflectance organic material. However, a complementary scanning electron microscopy investigation shows that most of the pores in these shale samples are contained in the organic components. The neutron scattering results indicate that the pores are not equally proportioned among the different constituents within the shale. There is some indication from the SANS results that the composition of the pore-containing material varies with pore size; the pore size distribution associated with mineral matter is different from that associated with organic phases.

  18. 78 FR 54659 - Agency Information Collection Activities: Submission to OMB for Review and Approval; Public...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-05

    ... groups will be conducted with up to eight participants in each for a total sample size of 32. The second... determine eligibility for the pilot study to recruit a sample of 500 participants (50 from each clinical... to participate in an in-depth, qualitative telephone interview for a total of 100 interviews. Finally...

  19. Air Flow and Pressure Drop Measurements Across Porous Oxides

    NASA Technical Reports Server (NTRS)

    Fox, Dennis S.; Cuy, Michael D.; Werner, Roger A.

    2008-01-01

    This report summarizes the results of air flow tests across eight porous, open-cell ceramic oxide samples. During ceramic specimen processing, the porosity was formed using the sacrificial template technique, with two different sizes of polystyrene beads used for the template. The samples were initially supplied with thicknesses ranging from 0.14 to 0.20 in. (0.35 to 0.50 cm) and nonuniform backside morphology (some areas dense, some porous). Samples were therefore ground to a thickness of 0.12 to 0.14 in. (0.30 to 0.35 cm) using dry 120 grit SiC paper. Pressure drop versus air flow is reported. Comparisons of samples with thickness variations are made, as are pressure drop estimates. As the density of the ceramic material increases, the maximum corrected flow decreases rapidly. Future sample sets should be supplied with samples of similar thickness and uniform surface morphology. This would allow a more consistent determination of air flow versus processing parameters and the resulting porosity size and distribution.

  20. Is the permeability of naturally fractured rocks scale dependent?

    NASA Astrophysics Data System (ADS)

    Azizmohammadi, Siroos; Matthäi, Stephan K.

    2017-09-01

    The equivalent permeability, keq, of stratified fractured porous rocks and its anisotropy is important for hydrocarbon reservoir engineering, groundwater hydrology, and subsurface contaminant transport. However, it is difficult to constrain this tensor property as it is strongly influenced by infrequent large fractures. Boreholes miss them, and their directional sampling bias affects the collected geostatistical data. Samples taken at any scale smaller than that of interest truncate distributions, and this bias leads to an incorrect characterization and property upscaling. To better understand this sampling problem, we have investigated a collection of outcrop-data-based Discrete Fracture and Matrix (DFM) models with mechanically constrained fracture aperture distributions, trying to establish a useful Representative Elementary Volume (REV). Finite-element analysis and flow-based upscaling have been used to determine keq eigenvalues and anisotropy. While our results indicate a convergence toward a scale-invariant keq REV with increasing sample size, keq magnitude can have multi-modal distributions. REV size relates to the length of dilated fracture segments as opposed to overall fracture length. Tensor orientation and degree of anisotropy also converge with sample size. However, the REV for keq anisotropy is larger than that for keq magnitude. Across scales, tensor orientation varies spatially, reflecting inhomogeneity of the fracture patterns. Inhomogeneity is particularly pronounced where the ambient stress selectively activates late as opposed to early (through-going) fractures. While we cannot detect any increase of keq with sample size as postulated in some earlier studies, our results highlight a strong keq anisotropy that influences scale dependence.

  1. Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples

    NASA Astrophysics Data System (ADS)

    Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.

    2014-12-01

    Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered to be reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine if traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis shows no statistically significant peak in energy density, suggesting the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies that scale with channel width/mean velocity and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability at a larger scale than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.
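    The 30-second versus 5.5-minute comparison above amounts to asking when the running mean of the concentration series settles. A crude Python stand-in for that stability check (synthetic data; the tolerance choice is an assumption, not the authors' criterion):

      import numpy as np

      def time_to_stable_mean(series, dt_s=1.0, tol=0.05):
          """First time after which the running mean stays within +/- tol
          (relative) of its final value."""
          running = np.cumsum(series) / np.arange(1, len(series) + 1)
          inside = np.abs(running - running[-1]) <= tol * abs(running[-1])
          for i in range(len(series)):
              if inside[i:].all():
                  return i * dt_s
          return None

      rng = np.random.default_rng(2)
      conc = 100 + rng.normal(0, 15, 330)  # 5.5 min of synthetic 1 Hz data
      print(time_to_stable_mean(conc), "s")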

  2. Determination of wear metals in engine oil by mild acid digestion and energy dispersive X-ray fluorescence spectrometry using solid phase extraction disks.

    PubMed

    Yang, Zheng; Hou, Xiandeng; Jones, Bradley T

    2003-03-10

    A simple, particle size-independent spectrometric method has been developed for the multi-element determination of wear metals in used engine oil. A small aliquot (0.5 ml) of an acid-digested oil sample is spotted onto a C-18 solid phase extraction disk to form a uniform thin film. The dried disk is then analyzed directly by energy dispersive X-ray fluorescence spectrometry. This technique provides a homogeneous and reproducible sample surface to the instrument, thus overcoming the typical problems associated with uneven particle size distribution and sedimentation. As a result, the method provides higher precision and accuracy than conventional methods. Furthermore, the disk sample may be stored and re-analyzed or extracted at a later date. The signals arising from the spotted disks, and the calibration curves constructed from them, are stable for at least 2 months. The limits of detection for Fe, Cu, Zn, Pb, and Cr are 5, 1, 4, 2, and 4 μg g⁻¹, respectively. Recoveries of these elements from spiked oil samples range from 92 to 110%. The analysis of two standard reference materials and a used oil sample produced results comparable to those found by inductively coupled plasma atomic emission spectrometry.
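    Both the detection limits and the recoveries reported above come from the linear calibration in a standard way. A short Python sketch with invented calibration data (the 3-sigma LOD definition is the common convention, not necessarily the authors' exact choice):

      import numpy as np

      # Hypothetical calibration: XRF signal (counts/s) vs Fe content (ug/g)
      conc = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
      signal = np.array([1.8, 13.1, 30.5, 59.2, 118.0])

      slope, intercept = np.polyfit(conc, signal, 1)
      blank_sd = 1.9                   # SD of repeated blank measurements (invented)
      lod = 3.0 * blank_sd / slope     # 3-sigma limit of detection
      recovery = 100.0 * 92.5 / 100.0  # found/spiked for a 100 ug/g spike (invented)
      print(round(lod, 1), "ug/g;", recovery, "% recovery")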

  3. X-ray phase contrast imaging of objects with subpixel-size inhomogeneities: a geometrical optics model.

    PubMed

    Gasilov, Sergei V; Coan, Paola

    2012-09-01

    Several x-ray phase contrast extraction algorithms use a set of images acquired along the rocking curve of a perfect flat analyzer crystal to study the internal structure of objects. By measuring the angular shift of the rocking curve peak, one can determine the local deflections of the x-ray beam propagated through a sample. Additionally, some objects induce a broadening of the crystal rocking curve, which can be explained in terms of multiple refraction of x rays by many subpixel-size inhomogeneities contained in the sample. This fact may allow us to differentiate between materials and features characterized by different refraction properties. In the present work we derive an expression for the beam broadening in the form of a linear integral of a quantity related to statistical properties of the dielectric susceptibility distribution function of the object.

  4. Determination of Natural 14C Abundances in Dissolved Organic Carbon in Organic-Rich Marine Sediment Porewaters by Thermal Sulfate Reduction

    NASA Astrophysics Data System (ADS)

    Johnson, L.; Komada, T.

    2010-12-01

    The abundances of natural 14C in dissolved organic carbon (DOC) in the marine environment hold clues regarding the processes that influence the biogeochemical cycling of this large carbon reservoir. At present, UV irradiation is the widely accepted method for oxidizing seawater DOC for determination of its 14C abundances. This technique yields precise and accurate values with low blanks, but it requires a dedicated vacuum line, and hence can be difficult to implement. As an alternative technique that can be conducted on a standard preparatory vacuum line, we modified and tested a thermal sulfate reduction method that was previously developed to determine δ13C values of marine DOC (Fry B. et al., 1996. Analysis of marine DOC using a dry combustion method. Mar. Chem., 54: 191-201.) to determine the 14C abundances of DOC in marine sediment porewaters. In this method, the sample is dried in a 100 ml round-bottom Pyrex flask in the presence of excess oxidant (K2SO4) and acid (H3PO4), and combusted at 550 °C. The combustion products are cryogenically processed to collect and quantify CO2 using standard procedures. Materials we have oxidized to date range from 6 to 24 ml in volume and 95 to 1500 μgC in size. The oxidation efficiency of this method was tested by processing known amounts of reagent-grade dextrose and sucrose (as examples of labile organic matter), tannic acid and humic acid (as examples of complex natural organic matter), and porewater DOC extracted from organic-rich nearshore sediments. The carbon yields for all of these materials averaged 99±4% (n=18). The 14C abundances of standard materials IAEA C-6 and IAEA C-5 processed by this method using >1 mgC aliquots were within error of certified values. The size and the isotopic value of the blank were determined by a standard dilution technique using IAEA C-6 and IAEA C-5 samples that ranged in size from 150 to 1500 μgC (n=4 and 2, respectively). This yielded a blank size of 6.7±0.7 μgC and a blank isotopic value of 0.54±0.05 fMC. The size of the blank agreed well with that determined directly by processing variable volumes of UV-irradiated deionized water (5.6±0.7 μgC, n=9). The size of the blank amounts to <~5% of the size of porewater DOC samples that are typically recovered from organic-rich sediment cores (~100-500 μgC). The fMC value of the blank suggests that there may be multiple sources of extraneous carbon that range in 14C abundance. In order to assess the fidelity of 14C abundances in natural porewater DOC oxidized by thermal sulfate reduction, we oxidized porewater DOC samples collected from the central floor of the Santa Monica Basin, California Borderland, using both this method and UV irradiation (the latter carried out at the Druffel laboratory, University of California Irvine). The fMC values obtained by the two methods agreed within error. Carbon yields from the two methods also agreed closely. These findings show that thermal sulfate reduction may be a promising method for oxidizing small, concentrated marine DOC samples for 14C analysis.
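    The blank correction implied by the figures above is a standard two-component isotope mass balance. A Python sketch using the blank size and fraction-modern value reported in the abstract (the measured sample values are invented):

      def blank_corrected_fmc(fm_meas, mass_meas_ugC,
                              fm_blank=0.54, mass_blank_ugC=6.7):
          """Remove a constant procedural blank from a measured sample:
          fm_s = (fm_m*m_m - fm_b*m_b) / (m_m - m_b). Standard correction,
          not the authors' own code."""
          return ((fm_meas * mass_meas_ugC - fm_blank * mass_blank_ugC)
                  / (mass_meas_ugC - mass_blank_ugC))

      # e.g. a 300 ugC porewater DOC sample measured at fMC = 0.80
      print(round(blank_corrected_fmc(0.80, 300.0), 4))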

  5. A multi-particle crushing apparatus for studying rock fragmentation due to repeated impacts

    NASA Astrophysics Data System (ADS)

    Huang, S.; Mohanty, B.; Xia, K.

    2017-12-01

    Rock crushing is a common process in mining and related operations. Although a number of particle crushing tests have been proposed in the literature, most of them are concerned with single-particle crushing, i.e., a single rock sample is crushed in each test. Considering the realistic scenario in crushers where many fragments are involved, a laboratory crushing apparatus is developed in this study. This device consists of a Hopkinson pressure bar system and a piston-holder system. The Hopkinson pressure bar system is used to apply calibrated dynamic loads to the piston-holder system, and the piston-holder system is used to hold rock samples and to recover fragments for subsequent particle size analysis. The rock samples are subjected to three to seven impacts under three impact velocities (2.2, 3.8, and 5.0 m/s), with the feed size of the rock particle samples limited to between 9.5 and 12.7 mm. Several key parameters are determined from this test, including particle size distribution parameters, impact velocity, loading pressure, and total work. The results show that the total work correlates well with the resulting fragment size distribution, and the apparatus provides a useful tool for studying the mechanism of crushing, which further provides guidelines for the design of commercial crushers.

  6. Formation of metallic clusters in oxide insulators by means of ion beam mixing

    NASA Astrophysics Data System (ADS)

    Talut, G.; Potzger, K.; Mücklich, A.; Zhou, Shengqiang

    2008-04-01

    The intermixing and near-interface cluster formation of Pt and FePt thin films deposited on different oxide surfaces by means of Pt+ ion irradiation and subsequent annealing was investigated. Irradiated as well as postannealed samples were investigated using high resolution transmission electron microscopy. In MgO and Y:ZrO2 covered with Pt, crystalline clusters with mean sizes of 2 and 3.5 nm were found after the Pt+ irradiations with 8×10¹⁵ and 2×10¹⁶ cm⁻² and subsequent annealing, respectively. In MgO samples covered with FePt, clusters with mean sizes of 1 and 2 nm were found after the Pt+ irradiations with 8×10¹⁵ and 2×10¹⁶ cm⁻² and subsequent annealing, respectively. In Y:ZrO2 samples covered with FePt, clusters up to 5 nm in size were found after the Pt+ irradiation with 2×10¹⁶ cm⁻² and subsequent annealing. In LaAlO3 the irradiation was accompanied by a full amorphization of the host matrix and the appearance of embedded clusters of different sizes. The determination of the lattice constant, and thus of the kind of clusters, in samples covered by FePt was hindered by strong deflection of the electron beam by the ferromagnetic FePt.

  7. Effect of field view size and lighting on unique-hue selection using Natural Color System object colors.

    PubMed

    Shamey, Renzo; Zubair, Muhammad; Cheema, Hammad

    2015-08-01

    The aim of this study was twofold: first, to determine the effect of field view size, and second, the effect of illumination conditions, on the selection of unique hue samples (UHs: R, Y, G and B) from two rotatable trays, each containing forty highly chromatic Natural Color System (NCS) samples, corresponding on one tray to a 1.4° and on the other to a 5.7° field of view. UH selections were made by 25 color-normal observers who repeated assessments three times with a gap of at least 24 h between trials. Observers separately assessed UHs under four illumination conditions simulating illuminants D65, A, F2 and F11. An apparent hue shift (statistically significant for UR) was noted for UH selections at the 5.7° field of view compared to those at 1.4°. Observers' overall variability was found to be higher for UH stimuli selections at the larger field of view. Intra-observer variability was found to be approximately 18.7% of inter-observer variability in the selection of samples for both sample sizes. The highest intra-observer variability was under simulated illuminant D65, followed by A, F11, and F2.

  8. Aggregate size and structure determination of nanomaterials in physiological media: importance of dynamic evolution

    NASA Astrophysics Data System (ADS)

    Afrooz, A. R. M. Nabiul; Hussain, Saber M.; Saleh, Navid B.

    2014-12-01

    Most in vitro nanotoxicological assays are performed after 24 h of exposure. However, in determining the size and shape effects of nanoparticles in toxicity assays, initial characterization data are generally used to describe the experimental outcome. The dynamic size and structure of aggregates are typically ignored in these studies. This brief communication reports the dynamic evolution of the aggregation characteristics of gold nanoparticles. The study finds that a gradual increase in the aggregate size of gold nanospheres (AuNS) occurs up to 6 h; beyond this period, the aggregation process shifts from gradual to more abrupt behavior as large networks are formed. Results of the study also show that aggregated clusters possess unique structural conformations depending on the nominal diameter of the nanoparticles. The differences in fractal dimensions of the AuNS samples likely arise from geometric differences, causing larger packing propensities for smaller particles. Both observations can have a profound influence on dosimetry for in vitro nanotoxicity analyses.

  9. Development of a magnetic lab-on-a-chip for point-of-care sepsis diagnosis

    NASA Astrophysics Data System (ADS)

    Schotter, Joerg; Shoshi, Astrit; Brueckl, Hubert

    2009-05-01

    We present design criteria, operation principles and experimental examples of magnetic marker manipulation for our magnetic lab-on-a-chip prototype. It incorporates both magnetic sample preparation and detection by embedded GMR-type magnetoresistive sensors and is optimized for the automated point-of-care detection of four different sepsis-indicative cytokines directly from about 5 μl of whole blood. The sample volume, magnetic particle size and cytokine concentration determine the microfluidic volume, sensor size and dimensioning of the magnetic gradient field generators. By optimizing these parameters to the specific diagnostic task, best performance is expected with respect to sensitivity, analysis time and reproducibility.

  10. Influence of Size on the Microstructure and Mechanical Properties of an AISI 304L Stainless Steel—A Comparison between Bulk and Fibers

    PubMed Central

    Baldenebro-Lopez, Francisco J.; Gomez-Esparza, Cynthia D.; Corral-Higuera, Ramon; Arredondo-Rea, Susana P.; Pellegrini-Cervantes, Manuel J.; Ledezma-Sillas, Jose E.; Martinez-Sanchez, Roberto; Herrera-Ramirez, Jose M.

    2015-01-01

    In this work, the mechanical properties and microstructural features of an AISI 304L stainless steel in two presentations, bulk and fibers, were systematically studied in order to establish the relationship among microstructure, mechanical properties, manufacturing process and the effect of sample size. The microstructure was analyzed by XRD, SEM and TEM techniques. The strength, Young's modulus and elongation of the samples were determined by tensile tests, while the hardness was measured by Vickers microhardness and nanoindentation tests. The materials have been observed to possess different mechanical and microstructural properties, which are compared and discussed. PMID:28787949

  12. In situ synchrotron investigation of grain growth behavior of nano-grained UO₂

    DOE PAGES

    Miao, Yinbin; Yao, Tiankai; Lian, Jie; ...

    2017-01-09

    Here, we report on the study of grain growth kinetics in nano-grained UO₂ samples. Dense nano-grained UO₂ samples with well-controlled stoichiometry and grain size were fabricated using the spark plasma sintering technique. To determine the grain growth kinetics at elevated temperatures, a synchrotron wide-angle X-ray scattering (WAXS) study was performed in situ to measure the real-time grain size evolution based on the modified Williamson-Hall analysis. The unique grain growth kinetics of nanocrystalline UO₂ at 730 °C and 820 °C were observed and explained by the difference in mobility of various grain boundaries.

  13. Influence of specimen dimensions on ductile-to-brittle transition temperature in Charpy impact test

    NASA Astrophysics Data System (ADS)

    Rzepa, S.; Bucki, T.; Konopík, P.; Džugan, J.; Rund, M.; Procházka, R.

    2017-02-01

    This paper discusses the correlation between specimen dimensions and transition temperature. Notch toughness properties of standard Charpy-V specimens are compared to samples of smaller width (7.5 mm, 5 mm, 2.5 mm) and sub-size Charpy specimens with a 3×4 mm cross section. In this study, transition curves are correlated with those based on the lateral ductile part of the fracture for the five geometries considered. Based on the results obtained, a correlation procedure is proposed for determining the transition temperature of full-size specimens from the fracture appearance of sub-sized specimens.

  14. OCT Amplitude and Speckle Statistics of Discrete Random Media.

    PubMed

    Almasian, Mitra; van Leeuwen, Ton G; Faber, Dirk J

    2017-11-01

    Speckle, amplitude fluctuations in optical coherence tomography (OCT) images, contains information on sub-resolution structural properties of the imaged sample. Speckle statistics could therefore be utilized in the characterization of biological tissues. However, a rigorous theoretical framework relating OCT speckle statistics to structural tissue properties has yet to be developed. As a first step, we present a theoretical description of OCT speckle, relating the OCT amplitude variance to size and organization for samples of discrete random media (DRM). Starting the calculations from the size and organization of the scattering particles, we analytically find expressions for the OCT amplitude mean, amplitude variance, the backscattering coefficient and the scattering coefficient. We assume fully developed speckle and verify the validity of this assumption by experiments on controlled samples of silica microspheres suspended in water. We show that the OCT amplitude variance is sensitive to sub-resolution changes in size and organization of the scattering particles. Experimentally determined and theoretically calculated optical properties are compared and in good agreement.

  15. Determining the linkage of disease-resistance genes to molecular markers: the LOD-SCORE method revisited with regard to necessary sample sizes.

    PubMed

    Hühn, M

    1995-05-01

    Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.
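    The resulting lower bound can be illustrated with the expected lod (ELOD) contributed per individual in the susceptible sub-population. A Python sketch under textbook F2 coupling-phase assumptions, with the conventional linkage threshold of 3; this is a generic ELOD calculation, not the paper's exact expressions:

      import math

      def min_f2_recessives_for_lod(r_hat, lod_threshold=3.0):
          """Among susceptible (aa) F2 individuals, coupling-phase marker
          genotype frequencies are (1-r)^2, 2r(1-r), r^2, versus
          1/4, 1/2, 1/4 under free recombination; the expected lod per
          individual then gives a lower bound on the sample size."""
          p = [(1 - r_hat) ** 2, 2 * r_hat * (1 - r_hat), r_hat ** 2]
          q = [0.25, 0.5, 0.25]
          elod = sum(pi * math.log10(pi / qi) for pi, qi in zip(p, q) if pi > 0)
          return math.ceil(lod_threshold / elod)

      for r in (0.05, 0.10, 0.20):
          print(r, min_f2_recessives_for_lod(r))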

  16. DESCARTES' RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA.

    PubMed

    Bhaskar, Anand; Song, Yun S

    2014-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.
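    For orientation, the statistic under discussion is easy to write down in the simplest case: under the standard neutral coalescent with constant population size, the expected (unnormalized) SFS is E[xi_i] = theta/i for i = 1, ..., n-1. A one-line Python illustration of that classical result (not the paper's identifiability bounds):

      import numpy as np

      def expected_sfs_constant(n, theta):
          """Expected unnormalized SFS for a constant-size population:
          E[xi_i] = theta / i, i = 1..n-1."""
          return theta / np.arange(1, n)

      print(expected_sfs_constant(n=10, theta=5.0))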

  18. Recent Structural Evolution of Early-Type Galaxies: Size Growth from z = 1 to z = 0

    NASA Astrophysics Data System (ADS)

    van der Wel, Arjen; Holden, Bradford P.; Zirm, Andrew W.; Franx, Marijn; Rettura, Alessandro; Illingworth, Garth D.; Ford, Holland C.

    2008-11-01

    Strong size and internal density evolution of early-type galaxies between z ~ 2 and the present has been reported by several authors. Here we analyze samples of nearby and distant (z ~ 1) galaxies with dynamically measured masses in order to confirm the previous, model-dependent results and constrain the uncertainties that may play a role. Velocity dispersion (σ) measurements are taken from the literature for 50 morphologically selected 0.8 < z < 1.2 field and cluster early-type galaxies with typical masses Mdyn = 2 × 10¹¹ M⊙. Sizes (Reff) are determined with Advanced Camera for Surveys imaging. We compare the distant sample with a large sample of nearby (0.04 < z < 0.08) early-type galaxies extracted from the Sloan Digital Sky Survey for which we determine sizes, masses, and densities in a consistent manner, using simulations to quantify systematic differences between the size measurements of nearby and distant galaxies. We find a highly significant difference between the σ-Reff distributions of the nearby and distant samples, regardless of sample selection effects. The implied evolution in Reff at fixed mass between z = 1 and the present is a factor of 1.97 ± 0.15. This is in qualitative agreement with semianalytic models; however, the observed evolution is much faster than the predicted evolution. Our results reinforce and are quantitatively consistent with previous, photometric studies that found size evolution of up to a factor of 5 since z ~ 2. A combination of structural evolution of individual galaxies through the accretion of companions and the continuous formation of early-type galaxies through increasingly gas-poor mergers is one plausible explanation of the observations. Based on observations with the Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555, and observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under NASA contract 1407. Based on observations collected at the European Southern Observatory, Chile (169.A-0458). Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.

  19. Slurry sampling high-resolution continuum source electrothermal atomic absorption spectrometry for direct beryllium determination in soil and sediment samples after elimination of SiO interference by least-squares background correction.

    PubMed

    Husáková, Lenka; Urbanová, Iva; Šafránková, Michaela; Šídová, Tereza

    2017-12-01

    In this work a simple, efficient, and environmentally-friendly method is proposed for the determination of Be in soil and sediment samples employing slurry sampling and high-resolution continuum source electrothermal atomic absorption spectrometry (HR-CS-ETAAS). The spectral effects originating from SiO species were identified and successfully corrected by means of a mathematical correction algorithm. Fractional factorial design was employed to assess the parameters affecting the analytical results and especially to help in the development of the slurry preparation and the optimization of measuring conditions. The effects of seven analytical variables, including particle size, concentration of glycerol and HNO3 for stabilization and analyte extraction, respectively, the effect of ultrasonic agitation for slurry homogenization, concentration of chemical modifier, and pyrolysis and atomization temperature, were investigated by a 2⁷⁻³ replicate (n = 3) design. Using the optimized experimental conditions, the proposed method allowed the determination of Be with a detection limit of 0.016 mg kg⁻¹ and a characteristic mass of 1.3 pg. Optimum results were obtained after preparing the slurries by weighing 100 mg of a sample with particle size < 54 µm and adding 25 mL of 20% w/w glycerol. The use of 1 μg Rh and 50 μg citric acid was found satisfactory for analyte stabilization. Accurate data were obtained with the use of matrix-free calibration. The accuracy of the method was confirmed by analysis of two certified reference materials (NIST SRM 2702 Inorganics in Marine Sediment and IGI BIL-1 Baikal Bottom Silt) and by comparison of the results obtained for ten real samples by slurry sampling with those determined after microwave-assisted extraction by inductively coupled plasma time-of-flight mass spectrometry (TOF-ICP-MS). The reported method has a precision better than 7%.

  1. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    PubMed

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for reporting analyses of log(x+1)- and √x-transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.
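
    As a hedged sketch of how such tabulated residual variation feeds a sample size: on the log(x+1) scale, a targeted ratio of geometric means of (AUC + 1) between arms is an additive shift, so the usual two-sample normal approximation applies. The SD below is a placeholder, not a value from the paper, and the paper's own calculations additionally handle age mixtures:

      import math
      from scipy.stats import norm

      def n_per_arm(sd_log, gm_ratio, alpha=0.05, power=0.80):
          """Per-arm n to detect a given ratio of geometric means of
          (AUC + 1), using the residual SD of log(AUC + 1) at follow-up."""
          delta = math.log(gm_ratio)            # additive shift on log scale
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil(2 * (z * sd_log / delta) ** 2)

      # Placeholder SD; the paper tabulates actual values by age and visit.
      print(n_per_arm(sd_log=0.55, gm_ratio=1.5))   # ~29 per arm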

  2. Size-segregated sugar composition of transported dust aerosols from Middle-East over Delhi during March 2012

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Aggarwal, S. G.; Fu, P. Q.; Kang, M.; Sarangi, B.; Sinha, D.; Kotnala, R. K.

    2017-06-01

    During March 20-22, 2012, Delhi experienced a massive dust-storm that originated in the Middle East. Size-segregated sampling of these dust aerosols was performed using a nine-stage Andersen sampler; five sets of samples were collected: before the dust-storm (BDS), dust-storm days 1 to 3 (DS1 to DS3), and after the dust-storm (ADS). Sugars (mono- and disaccharides, sugar-alcohols and anhydro-sugars) were determined by GC-MS. It was observed that at the onset of the dust-storm, the total suspended particulate matter (TSPM, sum of all stages) concentration in the DS1 sample increased by >2.5-fold compared with the BDS samples. Interestingly, fine particulate matter (sum of stages with cutoff size < 2.1 μm) loading in DS1 also increased by >2.5-fold compared with the BDS samples. Sugars analyzed in DS1 coarse-mode (sum of stages with cutoff size > 2.1 μm) samples showed a considerable increase (~1.7-2.8-fold) compared with the other samples. It was further observed that mono-saccharide, disaccharide and sugar-alcohol concentrations were enhanced in giant (> 9.0 μm) particles in DS1 samples as compared with other samples. On the other hand, anhydro-sugars comprised 13-27% of sugars in coarse-mode particles and were mostly found in the fine mode, constituting 66-85% of sugars in all the sample types. Trehalose showed an enhanced (~2-4-fold) concentration in DS1 aerosol samples in both the coarse (62.80 ng/m3) and fine (8.57 ng/m3) modes. This increase in trehalose content in both modes points to an origin in the transported desert dust and supports its candidacy as an organic tracer for desert dust entrainment. Further, levoglucosan-to-mannosan (L/M) ratios, which have been used to infer the type of biomass burning influencing aerosols, are found to be size-dependent in these samples. These ratios are higher for fine-mode particles and hence should be used with caution when interpreting sources.

  3. Application of magnetic techniques to lateral hydrocarbon migration - Lower Tertiary reservoir systems, UK North Sea

    NASA Astrophysics Data System (ADS)

    Badejo, S. A.; Muxworthy, A. R.; Fraser, A.

    2017-12-01

    Pyrolysis experiments show that magnetic minerals can be produced inorganically during oil formation in the `oil-kitchen'. Here we try to identify a magnetic proxy that can be used to trace hydrocarbon migration pathways by determining the morphology, abundance, mineralogy and size of the magnetic minerals present in reservoirs. We address this by examining the Tay formation in the Western Central Graben in the North Sea. The Tertiary sandstones are undeformed and laterally continuous in the form of an east-west trending channel, facilitating long-distance updip migration of oil and gas to the west. We have collected 179 samples from 20 oil-stained wells and 15 samples from three dry wells from the British Geological Survey Core Repository. Samples were selected based on geological observations (water-wet sandstone, oil-stained sandstone, siltstones and shale). The magnetic properties of the samples were determined using room-temperature measurements on a Vibrating Sample Magnetometer (VSM), low-temperature (0-300 K) measurements on a Magnetic Property Measurement System (MPMS) and high-temperature (300-973 K) measurements on a Kappabridge susceptibility meter. We identified magnetite, pyrrhotite, pyrite and siderite in the samples. An increasing presence of ferrimagnetic iron sulphides is observed along the known hydrocarbon migration pathway. Our initial results suggest that mineralogy, coupled with changes in grain size, is a possible proxy for hydrocarbon migration.

  4. Bacterial community structure in atmospheric particulate matters of different sizes during the haze days in Xi'an, China.

    PubMed

    Lu, Rui; Li, Yanpeng; Li, Wanxin; Xie, Zhengsheng; Fan, Chunlan; Liu, Pengxia; Deng, Shunxi

    2018-05-09

    Serious air pollution events have frequently occurred in China in association with the acceleration of urbanization and industrialization in recent years. Exposure to atmospheric particulate matter (PM) at high concentrations can lead to adverse effects on human health. Airborne bacteria are important constituents of microbial aerosols and include many pathogens. However, variations in bacterial community structure in atmospheric PM of different sizes (PM2.5, PM10 and TSP) have not yet been explored. In this study, PM samples of different sizes were collected during haze days from July 2016 to April 2017 to determine bacterial diversity and community structure. Samples from soils and leaf surfaces were also collected to determine potential sources of bacterial aerosols. High-throughput sequencing technology was used to generate bacterial community profiles, from which diversity and abundances in the samples were determined. Results showed that the dominant bacterial community structures in PM2.5, PM10 and TSP were strongly similar. Compared with non-haze days, the relative abundances of most bacterial pathogens on haze days did not increase. Meanwhile, temperature, O3 and NO2 had more significant effects on the bacterial community than the other environmental factors. Source-tracking analysis indicated that the airborne bacteria might not derive from the local environment; they may come from the entire city or from other regions via long-distance airflow transport. The results of this study improve our understanding of the influence of bioaerosols on human health and the potential sources of airborne microbes.

  5. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives: To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting: We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results: For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion: Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
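
    The design effects themselves are tabulated in the paper; as a sketch of the machinery behind them, one standard formulation of the treatment-effect variance for a cross-sectional SW-CRT (Hussey and Hughes, 2007, cluster-level random intercept model) can be coded directly, assuming equal blocks of clusters cross over at each step. All inputs below are illustrative:

      import numpy as np
      from scipy.stats import norm

      def sw_crt_power(I, steps, n, theta, sigma2, icc, alpha=0.05):
          """Power of a cross-sectional SW-CRT via the Hussey-Hughes variance.
          I clusters, T = steps + 1 periods, n subjects per cluster-period,
          theta = treatment effect, sigma2 = total variance, icc = ICC."""
          T = steps + 1
          tau2 = icc * sigma2                 # between-cluster variance
          sig2 = (1 - icc) * sigma2 / n       # variance of a cluster-period mean
          X = np.zeros((I, T))                # treatment indicator matrix
          per_step = I // steps
          for s in range(steps):
              X[s * per_step:(s + 1) * per_step, s + 1:] = 1
          U = X.sum()
          W = (X.sum(axis=0) ** 2).sum()
          V = (X.sum(axis=1) ** 2).sum()
          var = (I * sig2 * (sig2 + T * tau2)) / (
              (I * U - W) * sig2 + (U**2 + I * T * U - T * W - I * V) * tau2)
          return norm.cdf(abs(theta) / np.sqrt(var) - norm.ppf(1 - alpha / 2))

      # Example: 12 clusters, 6 steps, 10 subjects per cluster-period.
      print(sw_crt_power(I=12, steps=6, n=10, theta=0.3, sigma2=1.0, icc=0.05))

    Evaluating such variances for competing designs at the same total sample size recovers the kind of efficiency comparison the paper makes; note how the answer turns on the ICC, as the abstract states.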

  6. Size-segregated urban aerosol characterization by electron microscopy and dynamic light scattering and influence of sample preparation

    NASA Astrophysics Data System (ADS)

    Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav

    2018-04-01

    Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of PM real size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during the PM sample manipulation and also semiquantitative elemental composition of single particles were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) PM fractions were collected by high volume cascade impactor in Prague city center. Particles were examined using electron microscopy and their elemental composition was determined by energy dispersive X-ray spectroscopy. Larger or smaller particles, not corresponding to the impaction cut points, were found in all fractions, as they occur in agglomerates and are impacted according to their aerodynamic diameter. Elemental composition of particles in size-segregated fractions varied significantly. Ns-soot occurred in all size fractions. Metallic nanospheres were found in accumulation fractions, but not in ultrafine fraction where ns-soot, carbonaceous particles, and inorganic salts were identified. Dynamic light scattering was used to measure particle size distribution in water and in cell culture media. PM suspension of lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. Ultrafine fraction did not agglomerate after freezing/thawing the sample. Both lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum as measured during 24 h.

  7. Estimation of within-stratum variance for sample allocation: Foreign commodity production forecasting

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)

    1980-01-01

    The problem of determining the stratum variances required for an optimum sample allocation in remotely sensed crop surveys is investigated, with emphasis on an approach that treats stratum variance as a function of sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of the stratum variances. The procedure is applied to wheat in the U.S. Great Plains and evaluated on the basis of the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily when a conservative value (smaller than the expected value) is used for the field size and when crop statistics from the small political division level are used.
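
    The stratum variances estimated this way feed the classical Neyman optimum allocation, in which the sample allocated to stratum h is proportional to N_h * S_h; a minimal sketch with hypothetical strata:

      # Neyman optimum allocation: n_h proportional to N_h * S_h.
      def neyman_allocation(total_n, sizes, sds):
          weights = [N * S for N, S in zip(sizes, sds)]
          total = sum(weights)
          return [round(total_n * w / total) for w in weights]

      # Hypothetical strata: sampling-unit counts and estimated within-stratum SDs.
      sizes = [1200, 800, 400]      # units per stratum (illustrative)
      sds   = [0.20, 0.35, 0.50]    # estimated within-stratum SDs (illustrative)
      print(neyman_allocation(total_n=100, sizes=sizes, sds=sds))  # -> [33, 39, 28]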

  8. Far Field Modeling Methods For Characterizing Surface Detonations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, A.

    2015-10-08

    Savannah River National Laboratory (SRNL) analyzed particle samples collected during experiments that were designed to replicate tests of nuclear weapons components that involve detonation of high explosives (HE). SRNL collected the particle samples in the HE debris cloud using innovative rocket-propelled samplers. SRNL used scanning electron microscopy to determine the elemental constituents of the particles and their size distributions. Depleted uranium composed about 7% of the particle contents. SRNL used the particle size distributions and elemental composition to perform transport calculations, which indicate that in many terrains and atmospheric conditions the uranium-bearing particles will be transported long distances downwind. This research established that HE tests specific to nuclear proliferation should be detectable at long downwind distances by sampling airborne particles created by the test detonations.

  9. Comparative study of Ni and Cu doped ZnO nanoparticles: Structural and optical properties

    NASA Astrophysics Data System (ADS)

    Thakur, Shaveta; Thakur, Samita; Sharma, Jyoti; Kumar, Sanjay

    2018-05-01

    Nanoparticles of undoped and doped (0.1 M Ni2+ and Cu2+) ZnO are synthesized using a chemical precipitation method. The crystallite size, morphology, chemical bonding and optical properties of the as-prepared nanoparticles are determined by X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared (FTIR) spectroscopy and UV-visible spectroscopy. XRD analysis shows that the prepared samples are single phase and have the hexagonal wurtzite structure. The crystallite size of the doped and undoped nanoparticles is determined using the Scherrer method and is found to increase with the concentration of nickel and copper. All stretching and vibrational bands are observed at their specific positions in the FTIR spectra. The increase in band gap can be attributed to the different chemical nature of the dopant and host cations.
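
    The Scherrer method referred to above estimates crystallite size D from XRD peak broadening via D = K*lambda / (beta * cos(theta)); a minimal sketch, neglecting instrumental broadening and using illustrative peak parameters rather than values from this paper:

      import math

      def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
          """Crystallite size D = K*lambda / (beta * cos(theta)), beta in radians."""
          theta = math.radians(two_theta_deg / 2)
          beta = math.radians(fwhm_deg)
          return K * wavelength_nm / (beta * math.cos(theta))

      # Illustrative ZnO (101) reflection: 2-theta ~ 36.3 deg, FWHM ~ 0.35 deg.
      print(f"D ~ {scherrer_size(36.3, 0.35):.1f} nm")   # roughly 24 nm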

  10. Structural characterization of casein micelles: shape changes during film formation.

    PubMed

    Gebhardt, R; Vendrely, C; Kulozik, U

    2011-11-09

    The objective of this study was to determine the effect of size-fractionation by centrifugation on the film structure of casein micelles. Fractionated casein micelles in solution were asymmetrically distributed with a small distribution width, as measured by dynamic light scattering. Films prepared from the size-fractionated samples showed a smooth surface in optical microscopy images and a homogeneous microstructure in atomic force micrographs. The nano- and microstructure of the casein films was probed by micro-beam grazing incidence small-angle X-ray scattering (μGISAXS). Compared to the solution measurements, the sizes determined in the film were larger and more broadly distributed. The measured GISAXS patterns clearly deviate from those simulated for a sphere and suggest a deformation of the casein micelles in the film.

  11. Environmental factors affecting soil metals near outlet roads in Poznań, Poland: impact of grain size, soil depth, and wind dispersal.

    PubMed

    Ciazela, Jakub; Siepak, Marcin

    2016-06-01

    We determined the Cd, Cr, Cu, Ni, Pb, and Zn concentrations in soil samples collected along the eight main outlet roads of Poznań. Samples were collected at distances of 1, 5, and 10 m from the roadway edges at depth intervals of 0-20 and 40-60 cm. The metal content was determined in seven grain-size fractions. The highest metal concentrations were observed in the smallest fraction (<0.063 mm), being up to four times higher than those in the sand fractions. Soil Pb, Cu, and Zn (and to a lesser extent Ni, Cr, and Cd) were all elevated relative to the geochemical background. At most sampling sites, metal concentrations decreased with increasing distance from the roadway edge and with increasing depth. In some locations, the accumulation of metals in soils appears to be strongly influenced by wind direction. Our findings should help predict the behavior of metals along outlet roads, which is important for assessing sources of further migration of heavy metals into groundwater, plants, and humans.

  12. A rational approach to legacy data validation when transitioning between electronic health record systems.

    PubMed

    Pageler, Natalie M; Grazier G'Sell, Max Jacob; Chandler, Warren; Mailes, Emily; Yang, Christine; Longhurst, Christopher A

    2016-09-01

    The objective of this project was to use statistical techniques to determine the completeness and accuracy of data migrated during electronic health record conversion. Data validation during migration consists of mapped record testing and validation of a sample of the data for completeness and accuracy. We statistically determined a random sample size for each data type based on the desired confidence level and error limits. The only error identified in the post go-live period was a failure to migrate some clinical notes, which was unrelated to the validation process. No errors in the migrated data were found during the 12-month post-implementation period. Compared with the typical industry approach, we have demonstrated that a statistical approach to sample size for data validation can ensure consistent confidence levels while maximizing efficiency of the validation process during a major electronic health record conversion.
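
    The paper does not reproduce its formula, but a common way to fix a validation sample size from a desired confidence level and error limit is the normal approximation for a proportion with a finite population correction; a hedged sketch (the worst-case p = 0.5 assumption and the table size are illustrative):

      import math
      from scipy.stats import norm

      def validation_sample_size(N, conf=0.95, error=0.02, p=0.5):
          """Records to sample from a migrated table of N rows so the error
          rate is estimated within +/- `error` at the given confidence;
          p = 0.5 is the conservative worst-case error-rate assumption."""
          z = norm.ppf(1 - (1 - conf) / 2)
          n0 = z**2 * p * (1 - p) / error**2          # infinite-population size
          return math.ceil(n0 / (1 + (n0 - 1) / N))   # finite population correction

      print(validation_sample_size(N=250_000))        # -> ~2379 records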

  13. Problems in determining the surface density of the Galactic disk

    NASA Technical Reports Server (NTRS)

    Statler, Thomas S.

    1989-01-01

    A new method is presented for determining the local surface density of the Galactic disk from distance and velocity measurements of stars toward the Galactic poles. The procedure is fully three-dimensional, approximating the Galactic potential by a potential of Staeckel form and using the analytic third integral to treat the tilt and the change of shape of the velocity ellipsoid consistently. Applying the procedure to artificial data superficially resembling the K dwarf sample of Kuijken and Gilmore (1988, 1989), it is shown that the current best estimates of local disk surface density are uncertain by at least 30 percent. Of this, about 25 percent is due to the size of the velocity sample, about 15 percent comes from uncertainties in the rotation curve and the solar galactocentric distance, and about 10 percent from ignorance of the shape of the velocity distribution above z = 1 kpc, the errors adding in quadrature. Increasing the sample size by a factor of 3 will reduce the error to 20 percent. To achieve 10 percent accuracy, observations will be needed along other lines of sight to constrain the shape of the velocity ellipsoid.
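
    The quoted error budget combines independent sources in quadrature, and the velocity-sample term shrinks roughly as one over the square root of the sample size; a quick check using the rounded percentages from the abstract (so totals are approximate):

      import math

      # Independent error sources (fractions), added in quadrature.
      sample, rotation, shape = 0.25, 0.15, 0.10
      total = math.hypot(math.hypot(sample, rotation), shape)
      print(f"total ~ {total:.0%}")                  # ~31%, i.e. "at least 30 percent"

      # Tripling the velocity sample shrinks that term by sqrt(3):
      total3 = math.hypot(math.hypot(sample / math.sqrt(3), rotation), shape)
      print(f"with 3x sample ~ {total3:.0%}")        # ~23%, close to the quoted 20 percent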

  14. Porosity of the Marcellus Shale: A contrast matching small-angle neutron scattering study

    USGS Publications Warehouse

    Bahadur, Jitendra; Ruppert, Leslie F.; Pipich, Vitaliy; Sakurovs, Richard; Melnichenko, Yuri B.

    2018-01-01

    Neutron scattering techniques were used to determine the effect of mineral matter on the accessibility of water and toluene to pores in the Devonian Marcellus Shale. Three Marcellus Shale samples, representing quartz-rich, clay-rich, and carbonate-rich facies, were examined using contrast matching small-angle neutron scattering (CM-SANS) at ambient pressure and temperature. Contrast-matching mixtures of H2O and D2O, and of toluene and deuterated toluene, were used to probe the open and closed pores of these three shale samples. Results show that although the mean pore radius was approximately the same for all three samples, the fractal dimension of the quartz-rich sample was higher than for the clay-rich and carbonate-rich samples, indicating different pore size distributions among the samples. The number density of pores was highest in the clay-rich sample and lowest in the quartz-rich sample. Contrast matching with water and toluene mixtures shows that the accessibility of pores to water and toluene also varied among the samples. In general, water accessed approximately 70–80% of the larger pores (>80 nm radius) in all three samples. At smaller pore sizes (~5–80 nm radius), the fraction of accessible pores decreases. The lowest accessibility to both fluids is at a pore throat size of ~25 nm radius, with the quartz-rich sample exhibiting lower accessibility than the clay- and carbonate-rich samples. The mechanism for this behaviour is unclear, but because the mineralogy of the three samples varies, it is likely that the inaccessible pores in this size range are associated with organics and not with a specific mineral within the samples. At even smaller pore sizes (~<2.5 nm radius), in all samples, the fraction of pores accessible to water increases again to approximately 70–80%. Accessibility to toluene generally follows that of water; however, in the smallest pores (~<2.5 nm radius), accessibility to toluene decreases, especially in the clay-rich sample, which contains about 30% more closed pores than the quartz- and carbonate-rich samples. Results from this study show that the mineralogy of producing intervals within a shale reservoir can affect the accessibility of pores to water and toluene, and these mineralogic differences may affect hydrocarbon storage and production and hydraulic fracturing characteristics.

  15. Effect of soil texture and chemical properties on laboratory-generated dust emissions from SW North America

    NASA Astrophysics Data System (ADS)

    Mockford, T.; Zobeck, T. M.; Lee, J. A.; Gill, T. E.; Dominguez, M. A.; Peinado, P.

    2012-12-01

    Understanding the controls on mineral dust emissions and their particle size distributions during wind-erosion events is critical, as dust particles have a significant impact in shaping the earth's climate. It has been suggested that emission rates and particle size distributions are independent of soil chemistry and soil texture. In this study, 45 samples of wind-erodible surface soils from the Southern High Plains and Chihuahuan Desert regions of Texas, New Mexico, Colorado and Chihuahua were analyzed with the Lubbock Dust Generation, Analysis and Sampling System (LDGASS) and a Beckman-Coulter particle multisizer. The LDGASS created dust emissions in a controlled laboratory setting using a rotating arm that allows particle collisions. The emitted dust was transferred to a chamber where the particulate matter concentration was recorded using a DataRam and a MiniVol filter, and the dust particle size distribution was recorded using a GRIMM particle analyzer. Particle size distributions were also determined from samples deposited on the MiniVol filters using a Beckman-Coulter particle multisizer. Soil textures of the source samples ranged from sands and sandy loams to clays and silts. Initial results suggest that total dust emissions increased with increasing soil clay and silt content and decreased with increasing sand content. Particle size distribution analysis showed a similar relationship; soils with high silt content produced the widest range of dust particle sizes and the smallest dust particles. Sand grains seem to produce the largest dust particles. Chemical control of dust emissions by calcium carbonate content will also be discussed.

  16. Probing the Magnetic Causes of CMEs: Free Magnetic Energy More Important Than Either Size Or Twist

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Gary, G. A.

    2006-01-01

    To probe the magnetic causes of CMEs, we have examined three types of magnetic measures: size, twist and total nonpotentiality (or total free magnetic energy) of an active region. Total nonpotentiality is roughly the product of size times twist. For predominately bipolar active regions, we have found that total nonpotentiality measures have the strongest correlation with future CME productivity (approx. 75% prediction success rate), while size and twist measures each have a weaker correlation with future CME productivity (approx. 65% prediction success rate) (Falconer, Moore, & Gary, ApJ, 644, 2006). For multipolar active regions, we find that the CME-prediction success rates for total nonpotentiality and size are about the same as for bipolar active regions. We also find that the size measure correlation with CME productivity is nearly all due to the contribution of size to total nonpotentiality. We have a total nonpotentiality measure that can be obtained from a line-of-sight magnetogram of the active region and that is as strongly correlated with CME productivity as are any of our total-nonpotentiality measures from deprojected vector magnetograms. We plan to further expand our sample by using MDI magnetograms of each active region in our sample to determine its total nonpotentiality and size on each day that the active region was within 30 deg. of disk center. The resulting increase in sample size will improve our statistics and allow us to investigate whether the nonpotentiality threshold for CME production is nearly the same or significantly different for multipolar regions than for bipolar regions. In addition, we will investigate the time rates of change of size and total nonpotentiality as additional causes of CME productivity.

  17. Magnetic properties of M0.3Fe2.7O4 (M = Fe, Zn and Mn) ferrites nanoparticles

    NASA Astrophysics Data System (ADS)

    Modaresi, Nahid; Afzalzadeh, Reza; Aslibeiki, Bagher; Kameli, Parviz

    2018-06-01

    In the present article a comparative study of the structural and magnetic properties of nano-sized M0.3Fe0.7Fe2O4 (M = Fe, Zn and Mn) ferrites is reported. The X-ray diffraction (XRD) patterns show that the crystallite size depends on the cation distribution. The Rietveld refinement of the XRD patterns using MAUD software determined the distribution of cations and the unit cell dimensions. The magnetic measurements show that the maximum and minimum values of saturation magnetization are obtained for the Zn- and Mn-doped samples, respectively. The peak temperature of the AC magnetic susceptibility of the Zn- and Fe-doped samples below 300 K indicates superparamagnetic behavior in these samples at room temperature. The AC susceptibility results confirm the presence of strong interactions between the nanoparticles, which leads to a superspin glass state in the samples at low temperatures.

  18. Non-Destructive Evaluation of Grain Structure Using Air-Coupled Ultrasonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belvin, A. D.; Burrell, R. K.; Cole, E.G.

    2009-08-01

    Cast material has a grain structure that is relatively non-uniform. There is a desire to evaluate the grain structure of this material non-destructively. Traditionally, grain size measurement is a destructive process involving the sectioning and metallographic imaging of the material. Generally, this is performed on a representative sample on a periodic basis. Sampling is inefficient and costly. Furthermore, the resulting data may not provide an accurate description of the entire part's average grain size or grain size variation. This project is designed to develop a non-destructive acoustic scanning technique, using Chirp waveforms, to quantify average grain size and grain size variation across the surface of a cast material. A Chirp is a signal in which the frequency increases or decreases over time (frequency modulation). As a Chirp passes through a material, the material's grains reduce the signal (attenuation) by absorbing the signal energy. Geophysics research has shown a direct correlation between Chirp wave attenuation and mean grain size in geological structures. The goal of this project is to demonstrate that Chirp waveform attenuation can be used to measure grain size and grain variation in cast metals (uranium and other materials of interest). An off-axis ultrasonic inspection technique using air-coupled ultrasonics has been developed to determine grain size in cast materials. The technique gives a uniform response across the volume of the component. This technique has been demonstrated to provide generalized trends of grain variation over the samples investigated.
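
    To make the Chirp approach concrete, the following is a hedged sketch (not the authors' implementation) of generating a linear chirp and estimating frequency-dependent attenuation from the spectral ratio of a received signal to a reference; the damping model here is invented purely for illustration:

      import numpy as np
      from scipy.signal import chirp

      fs = 10e6                                    # 10 MHz sampling rate
      t = np.arange(0, 200e-6, 1 / fs)             # 200 us record
      reference = chirp(t, f0=0.2e6, t1=t[-1], f1=1.0e6)   # 0.2-1 MHz linear sweep

      # Toy "through-sample" signal: frequency-proportional exponential damping.
      # In a real measurement this would be the received air-coupled waveform.
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      REF = np.fft.rfft(reference)
      alpha = 2e-6 * freqs                         # made-up attenuation coefficient
      received = np.fft.irfft(REF * np.exp(-alpha), n=t.size)

      # Spectral-ratio attenuation estimate (dB), band-limited to the sweep.
      REC = np.fft.rfft(received)
      band = (freqs > 0.2e6) & (freqs < 1.0e6)
      atten_db = -20 * np.log10(np.abs(REC[band]) / np.abs(REF[band]))
      print(f"mean attenuation in band: {atten_db.mean():.1f} dB")

    In a grain-size study, the slope of such an attenuation-versus-frequency curve is the quantity that would be correlated against metallographic grain measurements.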

  19. Sex Determination from Fragmented and Degenerated DNA by Amplified Product-Length Polymorphism Bidirectional SNP Analysis of Amelogenin and SRY Genes.

    PubMed

    Masuyama, Kotoka; Shojo, Hideki; Nakanishi, Hiroaki; Inokuchi, Shota; Adachi, Noboru

    2017-01-01

    Sex determination is important in archeology and anthropology for the study of past societies, cultures, and human activities. Sex determination is also one of the most important components of individual identification in criminal investigations. We developed a new method of sex determination by detecting a single-nucleotide polymorphism in the amelogenin gene using amplified product-length polymorphisms in combination with sex-determining region Y analysis. We particularly focused on the most common types of postmortem DNA damage in ancient and forensic samples: fragmentation and nucleotide modification resulting from deamination. Amplicon size was designed to be less than 60 bp to make the method more useful for analyzing degraded DNA samples. All DNA samples collected from eight Japanese individuals (four male, four female) were evaluated correctly using our method. The detection limit for accurate sex determination was determined to be 20 pg of DNA. We compared our new method with commercial short tandem repeat analysis kits using DNA samples artificially fragmented by ultraviolet irradiation. Our novel method was the most robust for highly fragmented DNA samples. To deal with allelic dropout resulting from deamination, we adopted "bidirectional analysis," which analyzed samples from both sense and antisense strands. This new method was applied to 14 Jomon individuals (3500-year-old bone samples) whose sex had been identified morphologically. We could correctly identify the sex of 11 out of 14 individuals. These results show that our method is reliable for the sex determination of highly degenerated samples.

  1. Background concentrations of metals in soils from selected regions in the State of Washington

    USGS Publications Warehouse

    Ames, K.C.; Prych, E.A.

    1995-01-01

    Soil samples from 60 sites in the State of Washington were collected and analyzed to determine the magnitude and variability of background concentrations of metals in soils of the State. Samples were collected in areas that were relatively undisturbed by human activity from the most predominant soils in 12 different regions that are representative of large areas of Washington State. Concentrations of metals were determined by five different laboratory methods. Concentrations of mercury and nickel determined by both the total and total-recoverable methods displayed the greatest variability, followed by chromium and copper determined by the total-recoverable method. Concentrations of other metals, such as aluminum and barium determined by the total method, varied less. Most metals concentrations were found to be more nearly log-normally than normally distributed. Total metals concentrations were not significantly different among the different regions. However, total-recoverable metals concentrations were not as similar among different regions. Cluster analysis revealed that sampling sites in three regions encompassing the Puget Sound could be regrouped to form two new regions and sites in three regions in south-central and southeastern Washington State could also be regrouped into two new regions. Concentrations for 7 of 11 total-recoverable metals correlated with total metals concentrations. Concentrations of six total metals also correlated positively with organic carbon. Total-recoverable metals concentrations did not correlate with either organic carbon or particle size. Concentrations of metals determined by the leaching methods did not correlate with total or total-recoverable metals concentrations, nor did they correlate with organic carbon or particle size.

  2. [Sequential sampling plans to Orthezia praelonga Douglas (Hemiptera: Sternorrhyncha, Ortheziidae) in citrus].

    PubMed

    Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T

    2007-01-01

    Sequential sampling is characterized by the use of samples of variable size, and has the advantage of reducing sampling time and costs compared with fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine and 15 years of age. Twenty samplings were performed over the whole area of each stand by recording the presence or absence of scales on plants, with plots comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, with the Negative Binomial Distribution fitting most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans an economic threshold of 2% was adopted, and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 for stands with low and high infestation, respectively.
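
    A hedged sketch of Wald-type sequential boundaries for presence/absence sampling of the kind described, testing a low infestation proportion against a high one at alpha = beta = 0.10; the p0/p1 values are illustrative, not the plans' actual parameters:

      import math

      def sprt_lines(p0, p1, alpha=0.10, beta=0.10):
          """Wald SPRT acceptance/rejection lines for a binomial proportion.
          After n plots, accept H0 (no control needed) when the count of
          infested plots T_n <= s*n + h_accept, recommend control when
          T_n >= s*n + h_reject; otherwise keep sampling."""
          C = math.log(p1 * (1 - p0) / (p0 * (1 - p1)))
          s = math.log((1 - p0) / (1 - p1)) / C
          h_accept = math.log(beta / (1 - alpha)) / C     # negative intercept
          h_reject = math.log((1 - beta) / alpha) / C
          return s, h_accept, h_reject

      s, h0, h1 = sprt_lines(p0=0.01, p1=0.03)            # 1% vs 3% infestation
      for n in (25, 50, 100):
          print(n, round(s * n + h0, 1), round(s * n + h1, 1))

    Sampling stops as soon as the running count crosses either line, which is what shortens sampling relative to fixed-size plans.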

  3. Physical and chemical characteristics including total and geochemical forms of phosphorus in sediment from the top 30 centimeters of cores collected in October 2006 at 26 sites in Upper Klamath Lake, Oregon

    USGS Publications Warehouse

    Simon, Nancy S.; Ingle, Sarah N.

    2011-01-01

    This study of phosphorus (P) cycling in eutrophic Upper Klamath Lake (UKL), Oregon, was conducted by the U.S. Geological Survey in cooperation with the U.S. Bureau of Reclamation. Lakebed sediments from the upper 30 centimeters (cm) of cores collected from 26 sites were characterized. Cores were sampled at 0.5, 1.5, 2.5, 3.5, 4.5, 10, 15, 20, 25, and 30 cm. Prior to freezing, water content and sediment pH were determined. After being freeze-dried, all samples were separated into greater than 63-micron (μm) particle-size (coarse) and less than 63-μm particle-size (fine) fractions. In the surface samples (0.5 to 4.5 cm below the sediment water interface), approximately three-fourths of the particles were larger than 63 μm. The ratios of the coarse particle-size fraction (>63 μm) and the fine particle-size fraction (<63 μm) were approximately equal in samples at depths greater than 10 cm below the sediment water interface. Both size fractions of the freeze-dried samples were analyzed chemically, including determination of total concentrations of aluminum (Al), calcium (Ca), carbon (C), iron (Fe), poorly crystalline Fe, nitrogen (N), P, and titanium (Ti). Total Fe concentrations were the largest in sediment from the northern portion of UKL, Howard Bay, and the southern portion of the lake. Concentrations of total Al, Ca, and Ti were largest in sediment from the northern, central, and southernmost portions of the lake and in sediment from Howard Bay. Concentrations of total C and N were largest in sediment from the embayments and in sediment from the northern arm and southern portion of the lake in the general region of Buck Island. Concentrations of total C were larger in the greater than 63-μm particle-size fraction than in the less than 63-μm particle-size fraction. Sediments were sequentially extracted to determine concentrations of inorganic forms of P, including loosely sorbed P, P associated with poorly crystalline Fe oxides, and P associated with mineral phases. The difference between the concentration of total P and the sum of the concentrations of inorganic forms of P is referred to as residual P. Residual P was the largest fraction of P in all of the sediment samples. In UKL, the correlation between concentrations of total P and total Fe in sediment is poor (R2<0.1). The correlation between the concentrations of total P and P associated with poorly crystalline Fe oxides is good (R2=0.43) in surface sediment (0.5-4.5 cm below the sediment water interface) but poor (R2<0.1) in sediments at depths between 10 cm and 30 cm. Phosphorus associated with poorly crystalline Fe oxides is considered bioavailable because it is released when sediment conditions change from oxidizing to reducing, which causes dissolution of Fe oxides.

  4. The sample handling system for the Mars Icebreaker Life mission: from dirt to data.

    PubMed

    Davé, Arwen; Thompson, Sarah J; McKay, Christopher P; Stoker, Carol R; Zacny, Kris; Paulsen, Gale; Mellerowicz, Bolek; Glass, Brian J; Willson, David; Bonaccorsi, Rosalba; Rask, Jon

    2013-04-01

    The Mars Icebreaker Life mission will search for subsurface life on Mars. It consists of three payload elements: a drill to retrieve soil samples from approximately 1 m below the surface, a robotic sample handling system to deliver the sample from the drill to the instruments, and the instruments themselves. This paper will discuss the robotic sample handling system. Collecting samples from ice-rich soils on Mars in search of life presents two challenges: protection of that icy soil--considered a "special region" with respect to planetary protection--from contamination from Earth, and delivery of the icy, sticky soil to spacecraft instruments. We present a sampling device that meets these challenges. We built a prototype system and tested it at martian pressure, drilling into ice-cemented soil, collecting cuttings, and transferring them to the inlet port of the SOLID2 life-detection instrument. The tests successfully demonstrated that the Icebreaker drill, sample handling system, and life-detection instrument can collectively operate in these conditions and produce science data that can be delivered via telemetry--from dirt to data. Our results also demonstrate the feasibility of using an air gap to prevent forward contamination. We define a set of six analog soils for testing over a range of soil cohesion, from loose sand to basalt soil, with angles of repose of 27° and 39°, respectively. Particle size is a key determinant of jamming of mechanical parts by soil particles. Jamming occurs when the clearance between moving parts is equal in size to the most common particle size or equal to three of these particles together. Three particles acting together tend to form bridges and lead to clogging. Our experiments show that rotary-hammer action of the Icebreaker drill influences the particle size, typically reducing particle size by ≈ 100 μm.

  5. Analyzing forensic evidence based on density with magnetic levitation.

    PubMed

    Lockett, Matthew R; Mirica, Katherine A; Mace, Charles R; Blackledge, Robert D; Whitesides, George M

    2013-01-01

    This paper describes a method for determining the density of contact trace objects with magnetic levitation (MagLev). MagLev measurements accurately determine the density (±0.0002 g/cm3) of a diamagnetic object and are compatible with objects that are nonuniform in shape and size. The MagLev device (composed of two permanent magnets with like poles facing) and the method described provide a means of accurately determining the density of trace objects. This method is inexpensive, rapid, and verifiable and provides numerical values--independent of the specific apparatus or analyst--that correspond to the absolute density of the sample and that may be entered into a searchable database. We discuss the feasibility of MagLev as a possible means of characterizing forensic-related evidence and demonstrate the ability of MagLev to (i) determine the density of samples of glitter and gunpowder, (ii) separate glitter particles of different densities, and (iii) determine the density of a glitter sample that was removed from a complex sample matrix.
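
    In practice a MagLev density measurement reduces to a linear calibration of levitation height against the known densities of standards, which is then inverted for the unknown; a minimal sketch with invented calibration data (not values from the paper):

      import numpy as np

      # Calibration: levitation heights (mm) of density standards (g/cm^3).
      std_density = np.array([1.02, 1.10, 1.18, 1.26])    # illustrative standards
      std_height = np.array([11.2, 8.4, 5.7, 2.9])        # measured heights

      # Levitation height varies linearly with density between the magnets.
      slope, intercept = np.polyfit(std_height, std_density, 1)

      def density_from_height(h_mm):
          return slope * h_mm + intercept

      # Unknown glitter particle levitating at 7.1 mm:
      print(f"rho ~ {density_from_height(7.1):.4f} g/cm^3")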

  6. Quantification of Spore-forming Bacteria Carried by Dust Particles

    NASA Technical Reports Server (NTRS)

    Lin, Ying; Cholakian, Tanya; Gao, Wenming; Osman, Shariff; Barengoltz, Jack

    2006-01-01

    In order to establish a biological contamination transport model for predicting the cross-contamination risk during spacecraft assembly and upon landing on Mars, it is important to understand the relationship between spore-forming bacteria and their carrier particles. We conducted air and surface sampling in indoor, outdoor, and cleanroom environments to determine the ratio of spore-forming bacteria to their dust particle carriers of different sizes. The number of spore-forming bacteria was determined for various size groups of particles in a given environment. Our data also confirm the existence of multiple spores on a single particle and of spore clumps. This study will help in developing a better bio-contamination transport model, which in turn will help in determining forward contamination risks for future missions.

  7. Non-parametric estimation of population size changes from the site frequency spectrum.

    PubMed

    Waltoft, Berit Lindum; Hobolth, Asger

    2018-06-11

    Changes in population size are useful for understanding the evolutionary history of a species. Genetic variation within a species can be summarized by the site frequency spectrum (SFS). For a sample of size n, the SFS is a vector of length n - 1 whose entry i is the number of sites where the mutant base appears i times and the ancestral base appears n - i times. We present a new method, CubSFS, for estimating the changes in population size of a panmictic population from an observed SFS. First, we provide a straightforward proof of the expression for the expected site frequency spectrum depending only on the population size. Our derivation is based on an eigenvalue decomposition of the instantaneous coalescent rate matrix. Second, we solve the inverse problem of determining the changes in population size from an observed SFS. Our solution is based on a cubic spline for the population size. The cubic spline is determined by minimizing the weighted average of two terms, namely (i) the goodness of fit to the observed SFS, and (ii) a penalty term based on the smoothness of the changes. The weight is determined by cross-validation. The new method is validated on simulated demographic histories and applied to unfolded and folded SFS from 26 different human populations from the 1000 Genomes Project.
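
    For orientation, under a constant population size the expected unfolded SFS is proportional to 1/i, the baseline that demographic inference methods such as CubSFS measure departures from; a minimal sketch with an invented observed spectrum:

      import numpy as np

      def expected_constant_sfs(n, total_sites):
          """Expected unfolded SFS for a constant-size population: E[xi_i] ~ 1/i."""
          i = np.arange(1, n)
          probs = (1 / i) / (1 / i).sum()
          return total_sites * probs

      # Illustrative observed SFS for a sample of n = 10 sequences.
      observed = np.array([412, 185, 140, 96, 74, 68, 55, 49, 41])
      expected = expected_constant_sfs(n=10, total_sites=observed.sum())
      for k, (o, e) in enumerate(zip(observed, expected), start=1):
          print(f"xi_{k}: observed {o:4d}  expected {e:7.1f}")

    An excess of low-frequency variants relative to the 1/i line is the classic signature of population growth.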

  8. Designing clinical trials to test disease-modifying agents: application to the treatment trials of Alzheimer's disease.

    PubMed

    Xiong, Chengjie; van Belle, Gerald; Miller, J Philip; Morris, John C

    2011-02-01

    Therapeutic trials of disease-modifying agents for Alzheimer's disease (AD) require novel designs and analyses involving a switch of treatments for at least a portion of the subjects enrolled. Randomized start and randomized withdrawal designs are two examples of such designs. Crucial design parameters such as sample size and the time of treatment switch are important to understand in designing such clinical trials. The purpose of this article is to provide methods to determine sample sizes and the time of treatment switch, as well as optimum statistical tests of treatment efficacy, for clinical trials of disease-modifying agents for AD. A general linear mixed effects model is proposed to test the disease-modifying efficacy of novel therapeutic agents for AD. This model links the longitudinal growth from both the placebo arm and the treatment arm at the time of treatment switch for those in the delayed-treatment or early-withdrawal arm and incorporates the potential correlation in the rate of cognitive change before and after the treatment switch. Sample sizes and the optimum time for treatment switch of such trials, as well as the optimum test statistic for treatment efficacy, are determined according to the model. Assuming an evenly spaced longitudinal design over a fixed duration, the optimum treatment switching time in a randomized start or a randomized withdrawal trial is halfway through the trial. With the optimum test statistic for treatment efficacy and over a wide spectrum of model parameters, the optimum sample size allocations are fairly close to the simplest design with a sample size ratio of 1:1:1 among the treatment arm, the delayed-treatment or early-withdrawal arm, and the placebo arm. The application of the proposed methodology to AD provides evidence that much larger sample sizes are required to adequately power disease-modifying trials when compared with those for symptomatic agents, even when the treatment switch time and efficacy test are optimally chosen. The proposed method assumes that the only and immediate effect of treatment switch is on the rate of cognitive change. Crucial design parameters for clinical trials of disease-modifying agents for AD can thus be optimally chosen. Government and industry officials as well as academic researchers should consider the optimum use of these clinical trial designs for disease-modifying agents in their effort to search for treatments with the potential to modify the underlying pathophysiology of AD.

  9. Experimental design, power and sample size for animal reproduction experiments.

    PubMed

    Chapman, Phillip L; Seidel, George E

    2008-01-01

    The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
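
    As a Python counterpart to the interactive power programs the paper illustrates, the common two-group comparison can be computed with statsmodels (effect size is Cohen's d; all numbers are illustrative):

      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()

      # Sample size per group to detect a one-SD difference (d = 1.0)
      # with 80% power at alpha = 0.05, two-sided:
      n = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.80,
                               ratio=1.0, alternative='two-sided')
      print(f"n per group ~ {n:.1f}")          # roughly 17

      # Achieved power for 8 animals per group and the same effect size:
      p = analysis.power(effect_size=1.0, nobs1=8, alpha=0.05, ratio=1.0)
      print(f"power with n=8: {p:.2f}")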

  10. X-ray studies of aluminum alloy of the Al-Mg-Si system subjected to SPD processing

    NASA Astrophysics Data System (ADS)

    Sitdikov, V. D.; Murashkin, M. Yu; Khasanov, M. R.; Kasatkin, I. A.; Chizhov, P. S.; Bobruk, E. V.

    2014-08-01

    Recently it has been established that during high pressure torsion dynamic aging takes place in aluminum Al-Mg-Si alloys resulting in formation of nanosized particles of strengthening phases in the aluminum matrix, which greatly improves the electrical conductivity and strength properties. In the present paper structural characterization of ultrafine-grained (UFG) samples of aluminum 6201 alloy produced by severe plastic deformation (SPD) was performed using X-ray diffraction analysis. As a result, structure features (lattice parameter, size of coherent scattering domains) after dynamic aging of UFG samples were determined. The size and distribution of second- phase particles in the Al matrix were assessed with regard to HPT regimes. Impact of the size and distribution of the formed secondary phases on the strength, ductility and electrical conductivity is discussed.

  11. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter becomes unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
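
    A hedged sketch of the simulation idea: draw negative binomial (Poisson-gamma) counts with a low mean and a small sample size, then apply the method-of-moments dispersion estimator, one of the three estimators compared; the instability shows up as a large share of non-positive or undefined estimates. Parameter values are illustrative, not those of the study:

      import numpy as np

      rng = np.random.default_rng(42)

      def simulate_mom_dispersion(mu=0.5, alpha=1.0, n=50, reps=2000):
          """Method-of-moments dispersion estimates for NB counts,
          parameterised as Var = mu + alpha * mu^2."""
          # NB with mean mu and dispersion alpha: size r = 1/alpha, p = r/(r+mu).
          r = 1 / alpha
          draws = rng.negative_binomial(r, r / (r + mu), size=(reps, n))
          xbar = draws.mean(axis=1)
          s2 = draws.var(axis=1, ddof=1)
          with np.errstate(divide="ignore", invalid="ignore"):
              alpha_hat = (s2 - xbar) / xbar**2
          return alpha_hat

      a = simulate_mom_dispersion()
      bad = np.mean(~np.isfinite(a) | (a <= 0))
      print(f"share of unusable (<=0 or undefined) estimates: {bad:.1%}")
      print(f"median of usable estimates: {np.median(a[a > 0]):.2f} (true 1.0)")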

  12. A Bayesian nonparametric method for prediction in EST analysis

    PubMed Central

    Lijoi, Antonio; Mena, Ramsés H; Prünster, Igor

    2007-01-01

    Background: Expressed sequence tag (EST) analyses are a fundamental tool for gene identification in organisms. Given a preliminary EST sample from a certain library, several statistical prediction problems arise. In particular, it is of interest to estimate how many new genes can be detected in a future EST sample of given size and also to determine the gene discovery rate: these estimates represent the basis for deciding whether to proceed sequencing the library and, in case of a positive decision, a guideline for selecting the size of the new sample. Such information is also useful for establishing sequencing efficiency in experimental design and for measuring the degree of redundancy of an EST library. Results: In this work we propose a Bayesian nonparametric approach for tackling statistical problems related to EST surveys. In particular, we provide estimates for: a) the coverage, defined as the proportion of unique genes in the library represented in the given sample of reads; b) the number of new unique genes to be observed in a future sample; c) the discovery rate of new genes as a function of the future sample size. The Bayesian nonparametric model we adopt conveys, in a statistically rigorous way, the available information into prediction. Our proposal has appealing properties over frequentist nonparametric methods, which become unstable when prediction is required for large future samples. EST libraries, previously studied with frequentist methods, are analyzed in detail. Conclusion: The Bayesian nonparametric approach we undertake yields valuable tools for gene capture and prediction in EST libraries. The estimators we obtain do not feature the kind of drawbacks associated with frequentist estimators and are reliable for any size of the additional sample. PMID:17868445
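
    For contrast with the frequentist baseline the authors improve on, the classical Good-Turing coverage estimate from an initial EST sample is one minus the fraction of reads that are singletons; a minimal sketch (the paper's Bayesian nonparametric estimators are not reproduced here, and the read-to-gene assignments are invented):

      from collections import Counter

      def good_turing_coverage(gene_labels):
          """Good-Turing coverage: estimated fraction of the library's
          transcripts accounted for by genes already seen in the sample."""
          counts = Counter(gene_labels)
          n_reads = sum(counts.values())
          singletons = sum(1 for c in counts.values() if c == 1)
          return 1 - singletons / n_reads

      # Illustrative EST sample: read -> gene assignments.
      reads = ["g1"] * 40 + ["g2"] * 25 + ["g3"] * 10 + ["g4", "g5", "g6", "g7"]
      print(f"coverage ~ {good_turing_coverage(reads):.2f}")   # 4 singletons / 79 reads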

  13. Sample preparation for the determination of 241Am in sediments utilizing gamma-spectroscopy.

    PubMed

    Ristic, M; Degetto, S; Ast, T; Cantallupi, C

    2002-01-01

    This paper describes a procedure developed to separate americium-241 from the bulk of a sample by coprecipitation, followed by high-sensitivity gamma-counting of the concentrate in a well-type detector. It enables the measurement of 241Am at low concentrations, e.g. fallout levels in soils and sediments, or where large sample sizes are not available. The method is much faster and more reliable than those involving separation from other alpha-emitters, electroplating and alpha-spectrometry. A number of tracer experiments were performed in order to optimize the conditions for coprecipitation of 241Am from sediment leachates. A general outline of the determination of americium is also given.

  14. Smectite clays of Serbia and their application in adsorption of organic dyes

    NASA Astrophysics Data System (ADS)

    Milošević, Maja; Logar, Mihovil

    2014-05-01

    Colorants and dyes are currently available in over 100,000 different varieties, and several major industries use them daily in their manufacturing processes (textile, cosmetics, food, etc.). Since colorants are readily soluble in water, they pass through filter membranes without further decomposition and in that manner end up in the environment. The main goal of this work is to apply selected methods for determining the suitability of individual clays for adsorbing and removing colorants from polluted waters. For this study we chose four different raw clays from three regions in Serbia: Svrljig (B), Bogovina (Bo) and Slatina-Ub (C and V), and, as the colorant, methylene blue dye (MB; MERCK, analytical grade). Experiments were carried out to determine sample structure (XRD and IR), grain size (granulometry), cation exchange capacity (CEC, via spectrophotometry using MB) and adsorption capability (spectrophotometry and fluorimetry using MB). XRD and IR data show that the samples are smectite clays: samples B and Bo are mainly montmorillonite, while C and V are montmorillonite-illite clays. Granulometric distribution results indicate that samples B and Bo have smaller grain sizes, with over 60% below 1 μm, whereas samples C and V are more coarse-grained (40% over 20 μm). This grain-size distribution affects their specific surface areas, such that the coarse-grained samples have smaller specific surface areas. The cation exchange capacity determined with methylene blue indicates that the montmorillonite samples have larger CEC values (B = 37 meq/100 g, Bo = 50 meq/100 g) and the montmorillonite-illite samples smaller CEC values (C = 5 meq/100 g, V = 3 meq/100 g). Fluorimetry measurements gave a clear distinction between samples with higher and lower adsorption capability. The montmorillonite samples (B and Bo), with higher CEC values and smaller grain size, adsorb large amounts of methylene blue, which is visible in the absence of the fluorimetric band corresponding to methylene blue. The montmorillonite-illite samples, with smaller CEC values and coarser grain size, adsorb very small amounts of methylene blue from the suspension, which is visible in the appearance of the methylene blue band. Untreated, raw smectite clays of Serbia are efficient adsorbent materials for the removal of dyes from polluted waters. Samples from two regions in particular, Bogovina and Svrljig, show favorable adsorption results and represent good raw materials for the purification of wastewaters containing dyes. References: Jović-Jovičić, N., Milutinović-Nikolić, A., Gržetić, I., Jovanović, D. Organobentonite as efficient textile dye sorbent. Chem. Eng. Technol. 2008, 31(4), 567-574. Žunić, M.J., Milutinović-Nikolić, A.D., Jović-Jovičić, N.P., Banković, P.T., Mojović, Z.D., Manojlović, D.D., Jovanović, D.M. Modified bentonite as adsorbent and catalyst for purification of wastewaters containing dyes. Hem. Ind. 2010, 64(3), 193-199.

  15. Marine sources of ice nucleating particles: results from phytoplankton cultures and samples collected at sea

    NASA Astrophysics Data System (ADS)

    Wilbourn, E.; Thornton, D.; Brooks, S. D.; Graff, J.

    2016-12-01

    The role of marine aerosols as ice nucleating particles is currently poorly understood. Despite growing interest, there are remarkably few ice nucleation measurements on representative marine samples. Here we present results of heterogeneous ice nucleation from laboratory studies and in-situ air and sea water samples collected during NAAMES (North Atlantic Aerosol and Marine Ecosystems Study). Thalassiosira weissflogii (CCMP 1051) was grown under controlled conditions in batch cultures and the ice nucleating activity depended on the growth phase of the cultures. Immersion freezing temperatures of the lab-grown diatoms were determined daily using a custom ice nucleation apparatus cooled at a set rate. Our results show that the age of the culture had a significant impact on ice nucleation temperature, with samples in stationary phase causing nucleation at -19.9 °C, approximately nine degrees warmer than the freezing temperature during exponential growth phase. Field samples gathered during the NAAMES II cruise in May 2016 were also tested for ice nucleating ability. Two types of samples were gathered. Firstly, whole cells were fractionated by size from surface seawater using a BD Biosciences Influx Cell Sorter (BD BS ISD). Secondly, aerosols were generated using the SeaSweep and subsequently size-selected using a PIXE Cascade Impactor. Samples were tested for the presence of ice nucleating particles (INP) using the technique described above. There were significant differences in the freezing temperature of the different samples; of the three sample types the lab-grown cultures tested during stationary phase froze at the warmest temperatures, followed by the SeaSweep samples (-25.6 °C) and the size-fractionated cell samples (-31.3 °C). Differences in ice nucleation ability may be due to size differences between the INP, differences in chemical composition of the sample, or some combination of these two factors. Results will be presented and atmospheric implications discussed.

  16. Home and School Environments as Determinant of Social Skills Deficit among Learners with Intellectual Disability in Lagos State

    ERIC Educational Resources Information Center

    Isawumi, Oyeyinka David; Oyundoyin, John Olusegun

    2016-01-01

    The study examined home and school environmental factors as determinants of social skills deficit among learners with intellectual disability in Lagos State, Nigeria. The study adopted a survey research method using a sample size of fifty (50) pupils with intellectual disability who were purposively selected from five special primary schools in Lagos…

  17. The effects of neutralized particles on the sampling efficiency of polyurethane foam used to estimate the extrathoracic deposition fraction.

    PubMed

    Tomyn, Ronald L; Sleeth, Darrah K; Thiese, Matthew S; Larson, Rodney R

    2016-01-01

    In addition to chemical composition, the site of deposition of inhaled particles is important for determining the potential health effects of an exposure. As a result, the International Organization for Standardization adopted a particle deposition sampling convention. This includes extrathoracic particle deposition sampling conventions for the anterior nasal passages (ET1) and the posterior nasal and oral passages (ET2). This study assessed how well a polyurethane foam insert placed in an Institute of Occupational Medicine (IOM) sampler can match an extrathoracic deposition sampling convention, while accounting for possible static buildup in the test particles. In this way, the study aimed to assess whether neutralized particles affected the performance of this sampler for estimating extrathoracic particle deposition. A total of three different particle sizes (4.9, 9.5, and 12.8 µm) were used. For each trial, one particle size was introduced into a low-speed wind tunnel with a wind speed set at 0.2 m/s (∼40 ft/min). This wind speed was chosen to closely match the conditions of most indoor working environments. Each particle size was tested twice: once neutralized, using a high-voltage neutralizer, and once left in its normal (non-neutralized) state as standard particles. IOM samplers were fitted with a polyurethane foam insert and placed on a rotating mannequin inside the wind tunnel. Foam sampling efficiencies were calculated for all trials for comparison against the normalized ET1 sampling deposition convention. The foam sampling efficiencies matched the ET1 deposition convention well for the larger particle sizes, but showed a general trend of underestimation for all three particle sizes. The results of a Wilcoxon rank sum test also showed that only at 4.9 µm was there a statistically significant difference (p-value = 0.03) between the foam sampling efficiency using the standard particles and the neutralized particles. This is interpreted to mean that static buildup may be occurring, and that neutralizing particles 4.9 µm in diameter did affect the performance of the foam sampler when estimating extrathoracic particle deposition.
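
    For readers unfamiliar with the test used above, a minimal sketch of the comparison with scipy follows; the efficiency values are invented placeholders, not the study's data:

```python
from scipy.stats import ranksums

# Hypothetical foam sampling efficiencies at one particle size.
standard    = [0.41, 0.38, 0.44, 0.40, 0.36, 0.43]   # non-neutralized particles
neutralized = [0.33, 0.30, 0.35, 0.31, 0.34, 0.29]   # neutralized particles

stat, p = ranksums(standard, neutralized)
print(f"rank-sum statistic = {stat:.2f}, p-value = {p:.3f}")
```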

  18. Effects of storage time and temperature on pH, specific gravity, and crystal formation in urine samples from dogs and cats.

    PubMed

    Albasan, Hasan; Lulich, Jody P; Osborne, Carl A; Lekcharoensuk, Chalermpol; Ulrich, Lisa K; Carpenter, Kathleen A

    2003-01-15

    To determine effects of storage temperature and time on pH and specific gravity of and number and size of crystals in urine samples from dogs and cats. Randomized complete block design. 31 dogs and 8 cats. Aliquots of each urine sample were analyzed within 60 minutes of collection or after storage at room or refrigeration temperatures (20 vs 6 degrees C [68 vs 43 degrees F]) for 6 or 24 hours. Crystals formed in samples from 11 of 39 (28%) animals. Calcium oxalate (CaOx) crystals formed in vitro in samples from 1 cat and 8 dogs. Magnesium ammonium phosphate (MAP) crystals formed in vitro in samples from 2 dogs. Compared with aliquots stored at room temperature, refrigeration increased the number and size of crystals that formed in vitro; however, the increase in number and size of MAP crystals in stored urine samples was not significant. Increased storage time and decreased storage temperature were associated with a significant increase in number of CaOx crystals formed. Greater numbers of crystals formed in urine aliquots stored for 24 hours than in aliquots stored for 6 hours. Storage time and temperature did not have a significant effect on pH or specific gravity. Urine samples should be analyzed within 60 minutes of collection to minimize temperature- and time-dependent effects on in vitro crystal formation. Presence of crystals observed in stored samples should be validated by reevaluation of fresh urine.

  19. Calibrating the Ordovician Radiation of marine life: implications for Phanerozoic diversity trends

    NASA Technical Reports Server (NTRS)

    Miller, A. I.; Foote, M.

    1996-01-01

    It has long been suspected that trends in global marine biodiversity calibrated for the Phanerozoic may be affected by sampling problems. However, this possibility has not been evaluated definitively, and raw diversity trends are generally accepted at face value in macroevolutionary investigations. Here, we analyze a global-scale sample of fossil occurrences that allows us to determine directly the effects of sample size on the calibration of what is generally thought to be among the most significant global biodiversity increases in the history of life: the Ordovician Radiation. Utilizing a composite database that includes trilobites, brachiopods, and three classes of molluscs, we conduct rarefaction analyses to demonstrate that the diversification trajectory for the Radiation differed considerably from that suggested by raw diversity time series. Our analyses suggest that a substantial portion of the increase recognized in raw diversity depictions for the last three Ordovician epochs (the Llandeilian, Caradocian, and Ashgillian) is a consequence of the increased sample size of the preserved and catalogued fossil record. We also use biometric data for a global sample of Ordovician trilobites, along with methods of measuring morphological diversity that are not biased by sample size, to show that morphological diversification in this major clade had leveled off by the Llanvirnian. The discordance between raw diversity depictions and more robust taxonomic and morphological diversity metrics suggests that sampling effects may strongly influence our perception of biodiversity trends throughout the Phanerozoic.
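
    A minimal sketch of rarefaction by repeated random subsampling (one standard way to implement it; not the authors' code) is given below, with a synthetic occurrence list standing in for the fossil database:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic occurrence list: each entry is the taxon ID of one fossil
# occurrence, drawn with uneven abundances to mimic a real fauna.
abund = rng.dirichlet(np.ones(200))
occurrences = rng.choice(200, size=5000, p=abund)

def rarefy(occ, n, reps=300):
    """Mean number of distinct taxa in random subsamples of size n."""
    return np.mean([np.unique(rng.choice(occ, size=n, replace=False)).size
                    for _ in range(reps)])

for n in (100, 500, 1000, 2500, 5000):
    print(f"n = {n:>4}: expected richness ~ {rarefy(occurrences, n):.1f}")
```

    Comparing epochs at a common subsample size n, rather than at their raw (unequal) sample sizes, is what removes the sample-size artifact described in the abstract.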

  20. Total-body creatine pool size and skeletal muscle mass determination by creatine-(methyl-D3) dilution in rats.

    PubMed

    Stimpson, Stephen A; Turner, Scott M; Clifton, Lisa G; Poole, James C; Mohammed, Hussein A; Shearer, Todd W; Waitt, Greg M; Hagerty, Laura L; Remlinger, Katja S; Hellerstein, Marc K; Evans, William J

    2012-06-01

    There is currently no direct, facile method to determine total-body skeletal muscle mass for the diagnosis and treatment of skeletal muscle wasting conditions such as sarcopenia, cachexia, and disuse. We tested in rats the hypothesis that the enrichment of creatinine-(methyl-d3) (D3-creatinine) in urine after a defined oral tracer dose of D3-creatine can be used to determine creatine pool size and skeletal muscle mass. We determined 1) an oral tracer dose of D3-creatine that was completely bioavailable with minimal urinary spillage and sufficient enrichment in the body creatine pool for detection of D3-creatine in muscle and D3-creatinine in urine, and 2) the time to isotopic steady state. We used cross-sectional studies to compare total creatine pool size determined by the D3-creatine dilution method to lean body mass determined by independent methods. The tracer dose of D3-creatine (<1 mg/rat) was >99% bioavailable with 0.2-1.2% urinary spillage. Isotopic steady state was achieved within 24-48 h. Creatine pool size calculated from urinary D3-creatinine enrichment at 72 h significantly increased with muscle accrual in rat growth, significantly decreased with dexamethasone-induced skeletal muscle atrophy, was correlated with lean body mass (r = 0.9590; P < 0.0001), and corresponded to predicted total muscle mass. Total-body creatine pool size and skeletal muscle mass can thus be accurately and precisely determined by an orally delivered dose of D3-creatine followed by the measurement of D3-creatinine enrichment in a single urine sample and is promising as a noninvasive tool for the clinical determination of skeletal muscle mass.
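
    The tracer-dilution arithmetic behind this method can be sketched as follows, assuming complete mixing of the dose into the pool; the enrichment and the creatine-per-muscle conversion below are invented or generic literature values, not figures from this study:

```python
# Tracer dilution: a dose D mixing into pool P gives steady-state enrichment
# e = D / (P + D), hence P = D * (1 - e) / e.
dose_mg = 1.0          # oral D3-creatine dose (the study used < 1 mg/rat)
enrichment = 0.002     # urinary D3-creatinine enrichment (invented value)

pool_mg = dose_mg * (1.0 - enrichment) / enrichment
# Converting pool size to muscle mass requires an assumed creatine content of
# muscle; ~4.3 g creatine per kg wet muscle is a generic literature figure.
muscle_kg = (pool_mg / 1000.0) / 4.3
print(f"creatine pool ~ {pool_mg / 1000.0:.2f} g, "
      f"muscle ~ {muscle_kg * 1000:.0f} g")
```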

  1. Ratios of total suspended solids to suspended sediment concentrations by particle size

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.T.

    2011-01-01

    Wet-sieving sand-sized particles from a whole storm-water sample before splitting the sample into laboratory-prepared containers can reduce bias and improve the precision of suspended-sediment concentrations (SSC). Wet-sieving, however, may alter concentrations of total suspended solids (TSS) because the analytical method used to determine TSS may not have included the sediment retained on the sieves. Measuring TSS is still commonly used by environmental managers as a regulatory metric for solids in storm water. For this reason, a new method of correlating concentrations of TSS and SSC by particle size was used to develop a series of correction factors for SSC as a means to estimate TSS. In general, differences between TSS and SSC increased with greater particle size and higher sand content. Median correction factors to SSC ranged from 0.29 for particles larger than 500 μm to 0.85 for particles measuring from 32 to 63 μm. Great variability was observed in each fraction, a result of varying amounts of organic matter in the samples. Wide variability in organic content could reduce the transferability of the correction factors. © 2011 American Society of Civil Engineers.

  2. Distribution of trace elements in selected pulverized coals as a function of particle size and density

    USGS Publications Warehouse

    Senior, C.L.; Zeng, T.; Che, J.; Ames, M.R.; Sarofim, A.F.; Olmez, I.; Huggins, Frank E.; Shah, N.; Huffman, G.P.; Kolker, A.; Mroczkowski, S.; Palmer, C.; Finkelman, R.

    2000-01-01

    Trace elements in coal have diverse modes of occurrence that will greatly influence their behavior in many coal utilization processes. Mode of occurrence is important in determining the partitioning during coal cleaning by conventional processes, the susceptibility to oxidation upon exposure to air, as well as the changes in physical properties upon heating. In this study, three complementary methods were used to determine the concentrations and chemical states of trace elements in pulverized samples of four US coals: Pittsburgh, Illinois No. 6, Elkhorn and Hazard, and Wyodak coals. Neutron Activation Analysis (NAA) was used to measure the absolute concentration of elements in the parent coals and in the size- and density-fractionated samples. Chemical leaching and X-ray absorption fine structure (XAFS) spectroscopy were used to provide information on the form of occurrence of an element in the parent coals. The composition differences between size-segregated coal samples of different density mainly reflect the large density difference between minerals, especially pyrite, and the organic portion of the coal. The heavy density fractions are therefore enriched in pyrite and the elements associated with pyrite, as also shown by the leaching and XAFS methods. Nearly all the As is associated with pyrite in the three bituminous coals studied. The sub-bituminous coal has a very low content of pyrite and arsenic; in this coal arsenic appears to be primarily organically associated. Selenium is mainly associated with pyrite in the bituminous coal samples. In two bituminous coal samples, zinc is mostly in the form of ZnS or associated with pyrite, whereas it appears to be associated with other minerals in the other two coals. Zinc is also the only trace element studied that is significantly more concentrated in the smaller (45 to 63 μm) coal particles.

  3. Biomechanical behavior of bone scaffolds made of additive manufactured tricalciumphosphate and titanium alloy under different loading conditions.

    PubMed

    Wieding, Jan; Fritsche, Andreas; Heinl, Peter; Körner, Carolin; Cornelsen, Matthias; Seitz, Hermann; Mittelmeier, Wolfram; Bader, Rainer

    2013-12-16

    The repair of large segmental bone defects caused by fracture, tumor or infection remains challenging in orthopedic surgery. The capabilities of two different bone scaffold materials, sintered tricalcium phosphate (TCP) and a titanium alloy (Ti6Al4V), were determined by mechanical and biomechanical testing. All scaffolds were fabricated by means of additive manufacturing techniques with identical design and controlled pore geometry. Small-sized sintered TCP scaffolds (10 mm diameter, 21 mm length) were fabricated as dense and open-porous samples and tested in an axial loading procedure. Material properties for the titanium alloy were determined by using both tensile (dense) and compressive test samples (open-porous). Furthermore, large-sized open-porous TCP and titanium alloy scaffolds (30 mm in height and diameter, 700 µm pore size) were tested in a biomechanical setup simulating a large segmental bone defect using a composite femur stabilized with an osteosynthesis plate. Static physiologic loads (1.9 kN) were applied within these tests. Ultimate compressive strength of the TCP samples was 11.2 ± 0.7 MPa and 2.2 ± 0.3 MPa for the dense and the open-porous samples, respectively. Tensile strength of the dense titanium alloy samples was 909.8 ± 4.9 MPa, and ultimate compressive strength of the open-porous titanium alloy samples was 183.3 ± 3.7 MPa. Furthermore, the biomechanical results showed good mechanical stability for the titanium alloy scaffolds. TCP scaffolds failed at 30% of the maximum load. Based on these data, the 3D-printed TCP scaffolds tested cannot currently be recommended for high load-bearing situations. Scaffolds made of titanium could be optimized further by adapting them to the biomechanical requirements.

  4. Nurses' Emotional Intelligence Impact on the Quality of Hospital Services

    PubMed Central

    Ranjbar Ezzatabadi, Mohammad; Bahrami, Mohammad Amin; Hadizadeh, Farzaneh; Arab, Masoomeh; Nasiri, Soheyla; Amiresmaili, Mohammadreza; Ahmadi Tehrani, Gholamreza

    2012-01-01

    Background: Emotional intelligence is the potential to feel, use, communicate, recognize, remember, describe, identify, learn from, manage, understand and explain emotions. Service quality can be defined as the post-consumption assessment of services by consumers, an assessment determined by many variables. Objectives: This study aimed to determine the impact of nurses' emotional intelligence on the quality of delivered services. Materials and Methods: This descriptive, applied study was carried out using a cross-sectional method in 2010. The research comprised two populations: patients admitted to three academic hospitals of Yazd, and the hospitals' nurses. Sample sizes were calculated using the sample size formulas for unlimited (patients) and limited (nursing staff) populations, and samples were obtained by the stratified random method. The data were collected with 4 validated questionnaires. Results: The results indicate that nurses' emotional intelligence has a direct effect on hospital service quality. The study also revealed that nurses' job satisfaction and communication skills play an intermediate role in the relation between emotional intelligence and service quality. Conclusions: This paper reports a new determinant of hospital service quality. PMID:23482866
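
    The abstract does not reproduce the formulas, but the standard versions for "unlimited" and "limited" populations are Cochran's formula and the finite population correction; a small sketch with assumed inputs (z = 1.96, p = 0.5, e = 0.05, and a hypothetical nursing staff of N = 450):

```python
import math

def cochran_n(z=1.96, p=0.5, e=0.05):
    """Sample size for estimating a proportion in a very large population."""
    return z**2 * p * (1 - p) / e**2

def fpc_n(n0, N):
    """Finite population correction for a population of known size N."""
    return n0 / (1 + (n0 - 1) / N)

n0 = cochran_n()    # ~384 for an effectively unlimited population
print(f"patients (unlimited population): n = {math.ceil(n0)}")
print(f"nurses (limited, N = 450):       n = {math.ceil(fpc_n(n0, 450))}")
```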

  5. Catch of channel catfish with tandem-set hoop nets and gill nets in lentic systems of Nebraska

    USGS Publications Warehouse

    Richters, Lindsey K.; Pope, Kevin L.

    2011-01-01

    Twenty-six Nebraska water bodies representing two ecosystem types (small standing waters and large standing waters) were surveyed during 2008 and 2009 with tandem-set hoop nets and experimental gill nets to determine if similar trends existed in catch rates and size structures of channel catfish Ictalurus punctatus captured with these gears. Gear efficiency was assessed as the number of sets (nets) that would be required to capture 100 channel catfish given observed catch per unit effort (CPUE). Efficiency of gill nets was not correlated with efficiency of hoop nets for capturing channel catfish. Small sample sizes prohibited estimation of proportional size distributions in most surveys; in the four surveys for which sample size was sufficient to quantify length-frequency distributions of captured channel catfish, distributions differed between gears. The CPUE of channel catfish did not differ between small and large water bodies for either gear. While catch rates of hoop nets were lower than rates recorded in previous studies, this gear was more efficient than gill nets at capturing channel catfish. However, comparisons of size structure between gears may be problematic.

  6. Using multi-frequency acoustic attenuation to monitor grain size and concentration of suspended sediment in rivers.

    PubMed

    Moore, S A; Le Coz, J; Hurther, D; Paquier, A

    2013-04-01

    Multi-frequency acoustic backscatter profiles recorded with side-looking acoustic Doppler current profilers are used to monitor the concentration and size of sedimentary particles suspended in fluvial environments. Data at 300, 600, and 1200 kHz are presented from the Isère River in France where the dominant particles in suspension are silt and clay sizes. The contribution of suspended sediment to the through-water attenuation was determined for three high concentration (> 100 mg/L) events and compared to theoretical values for spherical particles having size distributions that were measured by laser diffraction in water samples. Agreement was good for the 300 kHz data, but it worsened with increasing frequency. A method for the determination of grain size using multi-frequency attenuation data is presented considering models for spherical and oblate spheroidal particles. When the resulting size estimates are used to convert sediment attenuation to concentration, the spheroidal model provides the best agreement with optical estimates of concentration, but the aspect ratio and grain size that provide the best fit differ between events. The acoustic estimates of size were one-third the values from laser grain sizing. This agreement is encouraging considering optical and acoustical instruments measure different parameters.

  7. Influence of the different carbon nanotubes on the development of electrochemical sensors for bisphenol A.

    PubMed

    Goulart, Lorena Athie; de Moraes, Fernando Cruz; Mascaro, Lucia Helena

    2016-01-01

    Different methods of functionalisation and the influence of multi-walled carbon nanotube size were investigated for the electrochemical determination of bisphenol A. Samples with diameters of 20 to 170 nm were functionalised in 5.0 mol L(-1) HNO3 and in a concentrated sulphonitric solution. The morphological characterisations before and after acid treatment were carried out by scanning electron microscopy and cyclic voltammetry. Both the size and the acid treatment affected the oxidation of bisphenol A. The multi-walled carbon nanotubes with a 20-40 nm diameter improved the method's sensitivity and achieved a detection limit for the determination of bisphenol A of 84.0 nmol L(-1).

  8. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in tests of fit have been developed. One strategy to handle the sample size problem is to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000, the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated when the adjusted sample size function is used. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
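
    A minimal sketch of the two strategies, assuming the standard SEM identity chi2 = (N - 1) * F_min as the basis of the adjustment (the full-sample chi-square value below is invented for illustration):

```python
N_full, chi2_full = 21_000, 840.0     # assumed full-sample test of fit
F_min = chi2_full / (N_full - 1)      # misfit per case, free of sample size

def chi2_adjusted(N_target):
    """Rescale the full-sample chi-square to a smaller nominal N."""
    return (N_target - 1) * F_min

for N_target in (10_000, 5_000, 1_000, 200):
    print(f"N = {N_target:>6}: adjusted chi2 = {chi2_adjusted(N_target):7.1f}")
# The random-sample strategy instead re-fits the model on an actual random
# subsample of N_target cases and uses that chi-square directly; the paper's
# point is that the two diverge once N_target drops well below ~5,000.
```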

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.

  10. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
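
    To make the procedure in these two abstracts concrete, here is a hedged sketch of the generalized fixed-point update for a two-component normal mixture with step size w; w = 1 recovers the ordinary successive-approximations (EM-style) update, and values of w between 0 and 2 are the range for which the papers establish local convergence. The data and starting values are invented:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 600), rng.normal(3.0, 1.0, 400)])

def fixed_point(theta):
    """One EM-style update for (pi, mu1, mu2, s1, s2)."""
    pi, mu1, mu2, s1, s2 = theta
    r = pi * norm.pdf(x, mu1, s1)
    r = r / (r + (1.0 - pi) * norm.pdf(x, mu2, s2))   # responsibilities
    mu1_n = np.average(x, weights=r)
    mu2_n = np.average(x, weights=1.0 - r)
    s1_n = np.sqrt(np.average((x - mu1_n) ** 2, weights=r))
    s2_n = np.sqrt(np.average((x - mu2_n) ** 2, weights=1.0 - r))
    return np.array([r.mean(), mu1_n, mu2_n, s1_n, s2_n])

w = 1.5                                        # step size in (0, 2)
theta = np.array([0.5, -1.0, 4.0, 1.0, 1.0])   # crude starting values
for _ in range(200):
    theta = theta + w * (fixed_point(theta) - theta)
print("pi, mu1, mu2, s1, s2 =", np.round(theta, 2))
```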

  11. Freeze-cast alumina pore networks: Effects of freezing conditions and dispersion medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, S. M.; Xiao, X.; Faber, K. T.

    Alumina ceramics were freeze-cast from water- and camphene-based slurries under varying freezing conditions and examined using X-ray computed tomography (XCT). Pore network characteristics, i.e., porosity, pore size, geometric surface area, and tortuosity, were measured from XCT reconstructions and the data were used to develop a model to predict feature size from processing conditions. Classical solidification theory was used to examine relationships between pore size, temperature gradients, and freezing front velocity. Freezing front velocity was subsequently predicted from casting conditions via the two-phase Stefan problem. Resulting models for water-based samples agreed with solidification-based theories predicting lamellar spacing of binary eutectic alloys, and models for camphene-based samples concurred with those for dendritic growth. Relationships between freezing conditions and geometric surface area were also modeled by considering the inverse relationship between pore size and surface area. Tortuosity was determined to be dependent primarily on the type of dispersion medium. © 2015 Elsevier Ltd. All rights reserved.

  12. Particle Size Distribution of Heavy Metals and Magnetic Susceptibility in an Industrial Site.

    PubMed

    Ayoubi, Shamsollah; Soltani, Zeynab; Khademi, Hossein

    2018-05-01

    This study was conducted to explore the relationships between magnetic susceptibility and the concentrations of selected heavy metals in various particle-size fractions at an industrial site in central Iran. Soils were partitioned into five fractions (< 28, 28-75, 75-150, 150-300, and 300-2000 µm). Concentrations of the heavy metals Zn, Pb, Fe, Cu, Ni and Mn, together with magnetic susceptibility, were determined in bulk soil samples and in all fractions of 60 soil samples collected from a depth of 0-5 cm. The studied heavy metals, except for Pb and Fe, displayed a substantial enrichment in the < 28 µm fraction; these two elements appeared to be independent of the selected size fractions. Magnetic minerals were associated especially with the medium-size fractions (28-75, 75-150 and 150-300 µm). The highest correlations with heavy metals were found for the < 28 µm fraction, followed by the 150-300 µm fraction, both of which are susceptible to wind erosion in an arid environment.

  13. A thermal emission spectral library of rock-forming minerals

    NASA Astrophysics Data System (ADS)

    Christensen, Philip R.; Bandfield, Joshua L.; Hamilton, Victoria E.; Howard, Douglas A.; Lane, Melissa D.; Piatek, Jennifer L.; Ruff, Steven W.; Stefanov, William L.

    2000-04-01

    A library of thermal infrared spectra of silicate, carbonate, sulfate, phosphate, halide, and oxide minerals has been prepared for comparison to spectra obtained from planetary and Earth-orbiting spacecraft, airborne instruments, and laboratory measurements. The emphasis in developing this library has been to obtain pure samples of specific minerals. All samples were hand processed and analyzed for composition and purity. The majority are 710-1000 μm particle size fractions, chosen to minimize particle size effects. Spectral acquisition follows a method described previously, and emissivity is determined to within 2% in most cases. Each mineral spectrum is accompanied by descriptive information in database form including compositional information, sample quality, and a comments field to describe special circumstances and unique conditions. More than 150 samples were selected to include the common rock-forming minerals with an emphasis on igneous and sedimentary minerals. This library is available in digital form and will be expanded as new, well-characterized samples are acquired.

  14. Instrumental neutron activation analysis for studying size-fractionated aerosols

    NASA Astrophysics Data System (ADS)

    Salma, Imre; Zemplén-Papp, Éva

    1999-10-01

    Instrumental neutron activation analysis (INAA) was utilized for studying aerosol samples collected in a coarse and a fine size fraction on Nuclepore polycarbonate membrane filters. As a result of the panoramic INAA, 49 elements were determined in about 200-400 μg of particulate matter by two irradiations and four γ-spectrometric measurements. The analytical calculations were performed by the absolute (k0) standardization method. The calibration procedures, application protocol and data evaluation process are described and discussed; they now make it possible to analyse a considerable number of samples while assuring the quality of the results. As a demonstration of the system's analytical capabilities, the concentration ranges, median or mean atmospheric concentrations and detection limits are presented for an extensive series of aerosol samples collected within the framework of an urban air pollution study in Budapest. For most elements, the precision of the analysis was found to be better than the uncertainty introduced by the sampling techniques and sample variability.

  15. HPLC column-switching technique for sample preparation and fluorescence determination of propranolol in urine using fused-core columns in both dimensions.

    PubMed

    Satínský, Dalibor; Havlíková, Lucie; Solich, Petr

    2013-08-01

    A new and fast high-performance liquid chromatography (HPLC) column-switching method using fused-core columns in both dimensions for sample preconcentration and determination of propranolol in human urine has been developed. On-line sample pretreatment and propranolol preconcentration were performed on an Ascentis Express RP-C-18 guard column (5 × 4.6 mm), particle size, 2.7 μm, with mobile phase acetonitrile/water (5:95, v/v) at a flow rate of 2.0 mL min(-1) and at a temperature of 50 °C. Valve switch from pretreatment column to analytical column was set at 4.0 min in a back-flush mode. Separation of propranolol from other endogenous urine compounds was achieved on the fused-core column Ascentis Express RP-Amide (100 × 4.6 mm), particle size, 2.7 μm, with mobile phase acetonitrile/water solution of 0.5% triethylamine, pH adjusted to 4.5 by means of glacial acetic acid (25:75, v/v), at a flow rate of 1.0 mL min(-1) and at a temperature of 50 °C. Fluorescence excitation/emission detection wavelengths were set at 229/338 nm. A volume of 1,500 μL of filtered urine sample solution was injected directly into the column-switching HPLC system. The total analysis time including on-line sample pretreatment was less than 8 min. The experimentally determined limit of detection of the method was found to be 0.015 ng mL(-1).

  16. 7 CFR 51.2341 - Sample size for grade determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 51.2341 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT FRESH FRUITS, VEGETABLES AND OTHER...

  17. 7 CFR 51.2341 - Sample size for grade determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 51.2341 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 FRESH FRUITS, VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION...

  18. Metallographic Characterization of Wrought Depleted Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsyth, Robert Thomas; Hill, Mary Ann

    Metallographic characterization was performed on wrought depleted uranium (DU) samples taken from the longitudinal and transverse orientations at specific locations on two specimens. Characterization of the samples included general microstructure, inclusion analysis, grain size analysis, and microhardness testing. Comparisons of the characterization results were made to determine any differences based on specimen, sample orientation, or sample location. In addition, the characterization results for the wrought DU samples were compared with data obtained from the metallographic characterization of previously characterized cast DU samples. No differences were observed in microstructure; inclusion size, morphology, and distribution; or grain size with regard to specimen, location, or orientation for the wrought depleted uranium samples. However, a small difference was observed in average hardness with regard to orientation at the same locations within the same specimen: the longitudinal samples were slightly harder than the transverse samples from the same location of the same specimen. This was true for both wrought DU specimens. Comparing the wrought DU sample data with the previously characterized cast DU sample data, distinct differences in microstructure, inclusion size, morphology and distribution, grain size, and microhardness were observed. As expected, the microstructure of the wrought DU samples consisted of small recrystallized grains which were uniform, randomly oriented, and equiaxed, with minimal twinning observed in only a few grains. In contrast, the cast DU microstructure consisted of large, irregularly shaped grains with extensive twinning observed in most grains. Inclusions in the wrought DU samples were elongated, broken and cracked, and light and dark phases were observed in some inclusions. The mean inclusion area percentage for the wrought DU samples ranged from 0.08% to 0.34%, and the average inclusion density from all wrought DU samples was 1.62E+04/cm². Inclusions in the cast DU samples were equiaxed and intact, with light and dark phases observed in some inclusions. The mean inclusion area percentage for the cast DU samples ranged from 0.93% to 1.00%, and the average inclusion density from all cast DU samples was 2.83E+04/cm². The average mean grain area from all wrought DU samples was 141 μm², while the average mean grain area from all cast DU samples was 1.7 mm². The average Knoop microhardness from all wrought DU samples was 215 HK, and the average Knoop microhardness from all cast DU samples was 264 HK.

  19. Generation and Characterization of Nanoaerosols Using a Portable Scanning Mobility Particle Sizer and Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Marty, Adam J.

    The purpose of this research is to demonstrate the ability to generate and characterize a nanometer-sized aerosol using solutions, suspensions, and a bulk nanopowder, and to investigate the viability of using an acoustic dry aerosol generator/elutriator (ADAGE) to aerosolize a bulk nanopowder into a nanometer-sized aerosol. The research compares the results from a portable scanning mobility particle sizer (SMPS) to the more traditional method of counting and sizing particles on a filter sample using scanning electron microscopy (SEM). Sodium chloride aerosol was used for the comparisons. The sputter coating thickness, a conductive coating necessary for SEM, was measured on different sizes of polystyrene latex spheres (PSLS). Aluminum oxide powder was aerosolized using an ADAGE, and several different support membrane and sound frequency combinations were explored. A portable SMPS was used to determine the size distributions of the generated aerosols. Polycarbonate membrane (PCM) filter samples were collected for subsequent SEM analysis. The particle size distributions were determined from photographs of the membrane filters. SMPS data and membrane samples were collected simultaneously. The sputter coating thicknesses on four different sizes of PSLS, ranging from 57 nanometers (nm) to 220 nm, were measured using transmission electron microscopy, and the results from the SEM and SMPS were compared after accounting for the sputter coating thickness. Aluminum oxide nanopowder (20 nm) was aerosolized using a modified ADAGE technique. Four different support membranes and four different sound frequencies were tested with the ADAGE. The aerosol was collected onto PCM filters and the samples were examined using SEM. The results indicate that the SMPS and SEM distributions were log-normally distributed with median diameters of approximately 42 nm and 55 nm, respectively, and geometric standard deviations (GSD) of approximately 1.6 and 1.7, respectively. The two methods yielded similar distributional trends, with a difference in median diameters of approximately 11-15 nm. The sputter coating thickness on the different sizes of PSLSs ranged from 15.4-17.4 nm. The aerosols generated using the modified ADAGE were low in concentration. The particles remained as agglomerates and varied widely in size. An aluminum foil support membrane coupled with a high sound frequency generated the smallest agglomerates. A well-characterized sodium chloride aerosol was generated and was reproducible. The distributions determined using SEM were slightly larger than those obtained from the SMPS; however, the distributions had roughly the same shape, as reflected in their GSDs. This suggests that a portable SMPS is a suitable method for characterizing a nanoaerosol. The sizing techniques could be compared after correcting for the effects of the sputter coating necessary for SEM examination. It was determined that the sputter coating thickness on nano-sized particles and particles up to approximately 220 nm can be expected to be the same, and that the sputter coating can add considerably to the size of a nanoparticle. This has important implications for worker health where nanoaerosol exposure is a concern. The sputter coating must be considered when SEM is used to describe a nanoaerosol exposure. The performance of the modified ADAGE was less than expected; the low aerosol output prevented a more detailed analysis and limited it to only a qualitative comparison. Some combinations of support membranes and sound frequencies performed better than others, particularly conductive support membranes and high sound frequencies. In conclusion, a portable SMPS yielded results similar to those obtained by SEM. The sputter coating was the same thickness on the PSLSs studied. The sputter coating thickness must be considered when characterizing nanoparticles using SEM. Finally, a conductive support membrane and higher frequencies appeared to generate the smallest agglomerates using the ADAGE technique.

  20. Dealing with non-unique and non-monotonic response in particle sizing instruments

    NASA Astrophysics Data System (ADS)

    Rosenberg, Phil

    2017-04-01

    A number of instruments used as de-facto standards for measuring particle size distributions are actually incapable of uniquely determining the size of an individual particle. This is due to non-unique or non-monotonic response functions. Optical particle counters have non-monotonic response due to oscillations in the Mie response curves, especially for large aerosol and small cloud droplets. Scanning mobility particle sizers respond identically to two particles where the ratio of particle size to particle charge is approximately the same. Images of two differently sized cloud or precipitation particles taken by an optical array probe can have similar dimensions or shadowed area depending upon where they are in the imaging plane. A number of methods exist to deal with these issues, including assuming that positive and negative errors cancel, smoothing response curves, integrating regions in measurement space before conversion to size space, and matrix inversion. Matrix inversion (also called kernel inversion) has the advantage that it determines the size distribution which best matches the observations, given specific information about the instrument (a matrix which specifies the probability that a particle of a given size will be measured in a given instrument size bin). In this way it maximises use of the information in the measurements. However, this technique can be confused by poor counting statistics, which can cause erroneous results and negative concentrations. Also, an effective method for propagating uncertainties is yet to be published or routinely implemented. Here we present a new alternative which overcomes these issues. We use Bayesian methods to determine the probability that a given size distribution is correct given a set of instrument data, and then we use Markov Chain Monte Carlo methods to sample this many-dimensional probability distribution function to determine the expectation and (co)variances, hence providing a best guess and an uncertainty for the size distribution which includes contributions from the non-unique response curve and counting statistics, and which can propagate calibration uncertainties.
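
    As a point of reference for the matrix-inversion baseline discussed above (the authors' Bayesian MCMC approach goes further), here is a small sketch using non-negative least squares on an invented, deliberately non-monotonic response matrix; NNLS at least prevents the negative concentrations mentioned in the abstract:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)

n_bins, n_sizes = 30, 15
sizes = np.linspace(0.0, 1.0, n_sizes)   # true particle sizes (arbitrary units)
bins = np.linspace(0.0, 1.0, n_bins)     # instrument bin positions

# K[i, j]: probability that a particle of size j lands in instrument bin i;
# the sinusoidal factor mimics a non-monotonic (Mie-like) response.
K = np.exp(-((bins[:, None] - sizes[None, :]) ** 2) / 0.02)
K *= 1.0 + 0.3 * np.sin(25.0 * sizes)
K /= K.sum(axis=0, keepdims=True)

true_n = 500.0 * np.exp(-((sizes - 0.4) ** 2) / 0.01)   # true size distribution
counts = rng.poisson(K @ true_n).astype(float)          # noisy bin counts

n_hat, residual = nnls(K, counts)    # non-negative kernel inversion
print(np.round(n_hat, 1))
```

    With sparse counts the NNLS solution degrades noticeably, which is exactly the counting-statistics weakness the Bayesian treatment is designed to handle.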

  1. Sample size calculations for the design of cluster randomized trials: A summary of methodology.

    PubMed

    Gao, Fei; Earnest, Arul; Matchar, David B; Campbell, Michael J; Machin, David

    2015-05-01

    Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine research and other clinical areas and parallel statistical developments concerned with the design and analysis of these trials have been stimulated. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated and there remain inadequacies in, for example, describing how the trial size is determined and the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on the methods of calculating sample sizes. We focus attention on designs with the primary purpose of comparing two interventions with respect to continuous, binary, ordered categorical, incidence rate and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size and the effect size are detailed. The problem of establishing the anticipated magnitude of between- and within-cluster variation to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation are also described. Illustrative examples of calculations of trial sizes for each endpoint type are included. Copyright © 2015 Elsevier Inc. All rights reserved.
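
    As a worked illustration of the core calculation for a continuous outcome (one of several endpoint types the paper covers), the sketch below inflates an individually randomized sample size by the design effect 1 + (m - 1) * ICC; all inputs are assumptions:

```python
import math
from scipy.stats import norm

def n_per_arm_individual(delta, sd, alpha=0.05, power=0.8):
    """Per-arm n for a two-sample comparison, normal approximation."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * ((z_a + z_b) * sd / delta) ** 2

def clusters_per_arm(delta, sd, m, icc, **kw):
    """Clusters of size m per arm after applying the design effect."""
    deff = 1 + (m - 1) * icc
    return math.ceil(n_per_arm_individual(delta, sd, **kw) * deff / m)

# Assumed inputs: effect 0.3 SD, 20 subjects per cluster, ICC = 0.05.
print(clusters_per_arm(delta=0.3, sd=1.0, m=20, icc=0.05))   # -> 17 clusters/arm
```

    Variation in cluster size and the choice between aggregate and non-aggregate analyses, both discussed in the paper, further inflate this basic figure.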

  2. Submicrometer Particle Sizing by Multiangle Light Scattering following Fractionation

    PubMed

    Wyatt

    1998-01-01

    The acid test for any particle sizing technique is its ability to determine the differential number fraction size distribution of a simple, well-defined sample. The very best characterized polystyrene latex sphere standards have been measured extensively using transmission electron microscope (TEM) images of a large subpopulation of such samples or by means of the electrostatic classification method as refined at the National Institute of Standards and Technology. The great success, in the past decade, of on-line multiangle light scattering (MALS) detection combined with size exclusion chromatography for the measurement of polymer mass and size distributions suggested, in the early 1990s, that a similar attack for particle characterization might prove useful as well. At that time, fractionation of particles was achievable by capillary hydrodynamic chromatography (CHDF) and field flow fractionation (FFF) methods. The latter has proven most useful when combined with MALS to provide accurate differential number fraction size distributions for a broad range of particle classes. The MALS/FFF combination provides unique advantages and precision relative to FFF, photon correlation spectroscopy, and CHDF techniques used alone. For many classes of particles, resolution of the MALS/FFF combination far exceeds that of TEM measurements. Copyright 1998 Academic Press. Copyright 1998Academic Press

  3. Further studies on the problems of geomagnetic field intensity determination from archaeological baked clay materials

    NASA Astrophysics Data System (ADS)

    Kostadinova-Avramova, M.; Kovacheva, M.

    2015-10-01

    Archaeological baked clay remains provide valuable information about the geomagnetic field in historical past, but determination of the geomagnetic field characteristics, especially intensity, is often a difficult task. This study was undertaken to elucidate the reasons for unsuccessful intensity determination experiments obtained from two different Bulgarian archaeological sites (Nessebar - Early Byzantine period and Malenovo - Early Iron Age). With this aim, artificial clay samples were formed in the laboratory and investigated. The clay used for the artificial samples preparation differs according to its initial state. Nessebar clay was baked in the antiquity, but Malenovo clay was raw, taken from the clay deposit near the site. The obtained artificial samples were repeatedly heated eight times in known magnetic field to 700 °C. X-ray diffraction analyses and rock-magnetic experiments were performed to obtain information about the mineralogical content and magnetic properties of the initial and laboratory heated clays. Two different protocols were applied for the intensity determination-Coe version of Thellier and Thellier method and multispecimen parallel differential pTRM protocol. Various combinations of laboratory fields and mutual positions of the directions of laboratory field and carried thermoremanence were used in the performed Coe experiment. The obtained results indicate that the failure of this experiment is probably related to unfavourable grain sizes of the prevailing magnetic carriers combined with the chosen experimental conditions. The multispecimen parallel differential pTRM protocol in its original form gives excellent results for the artificial samples, but failed for the real samples (samples coming from previously studied kilns of Nessebar and Malenovo sites). Obviously the strong dependence of this method on the homogeneity of the used subsamples hinders its implementation in its original form for archaeomaterials. The latter are often heterogeneous due to variable heating conditions in the different parts of the archaeological structures. The study draws attention to the importance of multiple heating for the stabilization of grain size distribution in baked clay materials and the need of elucidation of this question.

  4. Estimating individual glomerular volume in the human kidney: clinical perspectives

    PubMed Central

    Puelles, Victor G.; Zimanyi, Monika A.; Samuel, Terence; Hughson, Michael D.; Douglas-Denton, Rebecca N.; Bertram, John F.

    2012-01-01

    Background. Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. Methods. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin’s concordance coefficient (RC), coefficient of variation (CV) and coefficient of error (CE) measured reliability. Results. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (RC > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Conclusions. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution. PMID:21984554
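
    The stability question ("how many profiles are enough?") can be illustrated with a small resampling sketch in the spirit of the paper's Monte Carlo simulations (not the authors' code); the IGV values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(11)
# Hypothetical IGV values for one subject (right-skewed, arbitrary units):
igv = rng.lognormal(mean=1.0, sigma=0.4, size=30)

for k in (3, 6, 9, 15, 30):
    means = [rng.choice(igv, size=k, replace=False).mean()
             for _ in range(5000)]
    cv = np.std(means) / np.mean(means)
    print(f"k = {k:>2}: CV of subsample mean = {cv:.1%}")
```

    The spread of the subsample mean shrinks quickly with k, consistent with the paper's finding that nine or more sampled glomeruli give a stable estimate.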

  5. Clinical decision making and the expected value of information.

    PubMed

    Willan, Andrew R

    2007-01-01

    The results of the HOPE study, a randomized clinical trial, provide strong evidence that 1) ramipril prevents the composite outcome of cardiovascular death, myocardial infarction or stroke in patients who are at high risk of a cardiovascular event and 2) ramipril is cost-effective at a threshold willingness-to-pay of $10,000 to prevent an event of the composite outcome. In this report the concept of the expected value of information is used to determine whether the information provided by the HOPE study is sufficient for decision making in the US and Canada. Methods and results: Using the cost-effectiveness data from a clinical trial, or from a meta-analysis of several trials, one can determine, based on the number of future patients that would benefit from the health technology under investigation, the expected value of sample information (EVSI) of a future trial as a function of proposed sample size. If the EVSI exceeds the cost for any particular sample size, then the current information is insufficient for decision making and a future trial is indicated. If, on the other hand, there is no sample size for which the EVSI exceeds the cost, then there is sufficient information for decision making and no future trial is required. Using the data from the HOPE study, these concepts are applied for various assumptions regarding the fixed and variable costs of a future trial and the number of patients who would benefit from ramipril. Expected value of information methods provide a decision-analytic alternative to the standard likelihood methods for assessing the evidence provided by cost-effectiveness data from randomized clinical trials.
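
    A hedged sketch of the value-of-information arithmetic underlying this framework is given below; it computes population EVPI for a normally distributed incremental net benefit, which upper-bounds the EVSI of any finite trial. All numbers are invented, not taken from the HOPE analysis:

```python
from scipy.stats import norm

def evpi_per_patient(mu, sigma):
    """EVPI = E[max(b, 0)] - max(mu, 0) for b ~ N(mu, sigma^2)."""
    z = mu / sigma
    return mu * norm.cdf(z) + sigma * norm.pdf(z) - max(mu, 0.0)

mu, sigma = 250.0, 400.0     # assumed mean and sd of incremental net benefit ($)
N_future = 500_000           # assumed patients affected by the decision
print(f"population EVPI ~ ${N_future * evpi_per_patient(mu, sigma):,.0f}")
# A future trial is worth running only if its EVSI (which is <= this EVPI
# and approaches it as the proposed sample size grows) exceeds its cost.
```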

  6. Colloidal-facilitated transport of inorganic contaminants in ground water: part 1, sampling considerations

    USGS Publications Warehouse

    Puls, Robert W.; Eychaner, James H.; Powell, Robert M.

    1996-01-01

    Investigations at Pinal Creek, Arizona, evaluated routine sampling procedures for the determination of aqueous inorganic geochemistry and the assessment of contaminant transport by colloidal mobility. Sampling variables included pump type and flow rate, collection under air or nitrogen, and filter pore diameter. During well purging and sample collection, suspended particle size and number, as well as dissolved oxygen, temperature, specific conductance, pH, and redox potential, were monitored. Laboratory analyses of both unfiltered samples and the filtrates were performed by inductively coupled argon plasma, atomic absorption with graphite furnace, and ion chromatography. Scanning electron microscopy with energy-dispersive X-ray analysis was also used for analysis of filter particulates. Suspended particle counts consistently required approximately twice as long as the other field-monitored indicators to stabilize. High-flow-rate pumps entrained normally nonmobile particles. Differences in elemental concentrations obtained using different filter pore sizes were generally not large, with only two wells showing differences greater than 10 percent. Similar differences (>10%) were observed for some wells when samples were collected under nitrogen rather than in air. Fe2+/Fe3+ ratios for air-collected samples were smaller than for samples collected under a nitrogen atmosphere, reflecting sampling-induced oxidation.

  7. Centrifugal Pump Effect on Average Particle Diameter of Oil-Water Emulsion

    NASA Astrophysics Data System (ADS)

    Morozova, A.; Eskin, A.

    2017-11-01

    In this paper we examine the fragmentation of oil-water emulsion particles in the turbulent flow created by a centrifugal pump. We examined the influence of emulsion preparation time on the particle size of the oil products and the dependence of the pump's emulsifying capacity on the initial emulsion dispersion. The investigated emulsion contained the brand fuel oil M-100 and tap water; it was sprayed with a nozzle in a gas-water flare. After preparation of the emulsion, the centrifugal pump was turned on and emulsion samples were taken before and after passage through the pump at 15, 30 and 45 minutes of spraying. To determine the effect of the centrifugal pump on the dispersion of the oil-water emulsion, the mean particle diameter was determined by optical microscopy before and after passage through the pump. A dispersion analysis of the particles contained in the emulsion was carried out with a laser diffraction analyzer. Analysis of the emulsion images showed that the particle size of the oil products decreases after the centrifugal pump operates. This result is also confirmed by the analyzer's distributions, in which the content of fine particles smaller than 10 μm in diameter increased from 12% to 23%. Increasing the emulsion preparation time also decreases the particle size of the petroleum products.

  8. Spatial scale and sampling resolution affect measures of gap disturbance in a lowland tropical forest: implications for understanding forest regeneration and carbon storage

    PubMed Central

    Lobo, Elena; Dalling, James W.

    2014-01-01

    Treefall gaps play an important role in tropical forest dynamics and in determining above-ground biomass (AGB). However, our understanding of gap disturbance regimes is largely based either on surveys of forest plots that are small relative to spatial variation in gap disturbance, or on satellite imagery, which cannot accurately detect small gaps. We used high-resolution light detection and ranging data from a 1500 ha forest in Panama to: (i) determine how gap disturbance parameters are influenced by study area size and by the criteria used to define gaps; and (ii) evaluate how accurately previous ground-based canopy height sampling can determine the size and location of gaps. We found that plot-scale disturbance parameters frequently differed significantly from those measured at the landscape level, and that the canopy height thresholds used to define gaps strongly influenced the gap-size distribution, an important metric influencing AGB. Furthermore, simulated ground surveys of canopy height frequently misrepresented the true location of gaps, which may affect conclusions about how relatively small canopy gaps affect successional processes and contribute to the maintenance of diversity. Across-site comparisons need to consider how gap definition, scale and spatial resolution affect characterizations of gap disturbance, and its inferred importance for carbon storage and community composition. PMID:24452032

  9. A fiber optic sensor for noncontact measurement of shaft speed, torque, and power

    NASA Technical Reports Server (NTRS)

    Madzsar, George C.

    1990-01-01

    A fiber optic sensor which enables noncontact measurement of the speed, torque and power of a rotating shaft was fabricated and tested. The sensor provides a direct measurement of shaft rotational speed and shaft angular twist, from which torque and power can be determined. Angles of twist between 0.005 and 10 degrees were measured. Sensor resolution is limited by the sampling rate of the analog to digital converter, while accuracy is dependent on the spot size of the focused beam on the shaft. Increasing the sampling rate improves measurement resolution, and decreasing the focused spot size increases accuracy. Digital processing allows for enhancement of an electronically or optically degraded signal.
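
    The relationship the sensor exploits is the standard torsion formula: for a solid circular shaft, torque is proportional to the measured angle of twist, and power is torque times angular speed. A brief sketch with hypothetical shaft parameters (shear modulus, diameter and gauge length are assumptions, not values from the paper):

      import math

      def shaft_torque_and_power(twist_deg, rpm, G=79.3e9, d=0.025, L=0.30):
          # G: shear modulus (Pa, steel); d: shaft diameter (m); L: gauge length (m)
          J = math.pi * d**4 / 32                    # polar moment of area
          torque = G * J * math.radians(twist_deg) / L
          omega = rpm * 2 * math.pi / 60             # shaft speed in rad/s
          return torque, torque * omega

      tq, p = shaft_torque_and_power(twist_deg=0.5, rpm=3000)
      print(f"torque = {tq:.1f} N·m, power = {p / 1000:.1f} kW")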

  10. Annual cycle of size-resolved organic aerosol characterization in an urbanized desert environment

    NASA Astrophysics Data System (ADS)

    Cahill, Thomas M.

    2013-06-01

    Studies of size-resolved organic speciation of aerosols are still relatively rare and are generally conducted only over short durations. However, size-resolved organic data can both suggest possible sources of the aerosols and identify human exposure to the chemicals, since different aerosol sizes have different lung-capture efficiencies. The objective of this study was to conduct size-resolved organic aerosol speciation for a calendar year in Phoenix, Arizona to determine the seasonal variations in both chemical concentrations and size profiles. The results showed large seasonal differences in combustion pollutants, with the highest concentrations observed in winter. Summertime aerosols have a greater proportion of biological compounds (e.g. sugars and fatty acids), and the biological compounds represent the largest fraction of the organic compounds detected. These results suggest that standard organic carbon (OC) measurements might be heavily influenced by primary biological compounds, particularly for PM10 and TSP samples. Several large dust storms did not significantly alter the organic aerosol profile, since Phoenix lies in a dusty desert environment and the soil and plant tracer trehalose was almost always present. The aerosol size profiles showed that PAHs were generally most abundant in the smallest aerosol size fractions, which are most likely to be captured by the lung, while the biological compounds were almost exclusively found in the coarse size fraction.

  11. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
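
    A minimal sketch of the two-stage idea (hypothetical data and a toy measurement equation, not the authors' thermistor case study): the outer stage resamples plausible input-distribution parameters to reflect that they were estimated from small samples, and the inner stage performs the ordinary propagation of distributions given those parameters. For simplicity only the means are treated as uncertain here; a fuller treatment would also resample the standard deviations.

      import numpy as np

      rng = np.random.default_rng(1)
      x_obs = rng.normal(10.0, 0.5, size=8)   # small observed sample for input X
      y_obs = rng.normal(2.0, 0.2, size=5)    # small observed sample for input Y

      def measurand(x, y):
          return x * np.exp(-y)               # any non-linear measurement equation

      outer, inner = 2000, 500
      results = np.empty(outer * inner)
      for i in range(outer):
          # stage 1: resample the input means, acknowledging finite sample sizes
          mx = rng.normal(x_obs.mean(), x_obs.std(ddof=1) / np.sqrt(len(x_obs)))
          my = rng.normal(y_obs.mean(), y_obs.std(ddof=1) / np.sqrt(len(y_obs)))
          # stage 2: ordinary Monte Carlo propagation given those parameters
          xs = rng.normal(mx, x_obs.std(ddof=1), size=inner)
          ys = rng.normal(my, y_obs.std(ddof=1), size=inner)
          results[i * inner:(i + 1) * inner] = measurand(xs, ys)

      lo, hi = np.percentile(results, [2.5, 97.5])
      print(f"95% coverage interval for the measurand: [{lo:.3f}, {hi:.3f}]")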

  12. Characterizing string-of-pearls colloidal silica by multidetector hydrodynamic chromatography and comparison to multidetector size-exclusion chromatography, off-line multiangle static light scattering, and transmission electron microscopy.

    PubMed

    Brewer, Amanda K; Striegel, André M

    2011-04-15

    The string-of-pearls-type morphology is ubiquitous, manifesting itself variously in proteins, vesicles, bacteria, synthetic polymers, and biopolymers. Characterizing the size and shape of analytes with such morphology, however, presents a challenge, due chiefly to the ease with which the "strings" can be broken during chromatographic analysis or to the paucity of information obtained from the benchmark microscopy and off-line light scattering methods. Here, we address this challenge with multidetector hydrodynamic chromatography (HDC), which has the ability to determine, simultaneously, the size, shape, and compactness and their distributions of string-of-pearls samples. We present the quadruple-detector HDC analysis of colloidal string-of-pearls silica, employing static multiangle and quasielastic light scattering, differential viscometry, and differential refractometry as detection methods. The multidetector approach shows a sample that is broadly polydisperse in both molar mass and size, with strings ranging from two to five particles, but which also contains a high concentration of single, unattached "pearls". Synergistic combination of the various size parameters obtained from the multiplicity of detectors employed shows that the strings with higher degrees of polymerization have a shape similar to the theory-predicted shape of a Gaussian random coil chain of nonoverlapping beads, while the strings with lower degrees of polymerization have a prolate ellipsoidal shape. The HDC technique is contrasted experimentally with multidetector size-exclusion chromatography, where, even under extremely gentle conditions, the strings still degraded during analysis. Such degradation is shown to be absent in HDC, as evidenced by the fact that the molar mass and radius of gyration obtained by HDC with multiangle static light scattering detection (HDC/MALS) compare quite favorably to those determined by off-line MALS analysis under otherwise identical conditions. The multidetector HDC results were also comparable to those obtained by transmission electron microscopy (TEM). Unlike off-line MALS or TEM, however, multidetector HDC is able to provide complete particle analysis based on the molar mass, size, shape, and compactness and their distributions for the entire sample population in less than 20 min. © 2011 American Chemical Society

  13. Studies in Support of the Application of Statistical Theory to Design and Evaluation of Operational Tests. Annex D. An Application of Bayesian Statistical Methods in the Determination of Sample Size for Operational Testing in the U.S. Army

    DTIC Science & Technology

    1977-07-01

    [The abstract in the source record is garbled OCR of a Fortran listing from the report (utility-of-experiment variables UE1/UE2 for sample sizes XN1/XN2, a termination flag ICHECK, and DIMENSION statements for the SUBLIM, UPLIM and UEI arrays); no further text is recoverable.]

  14. Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor

    NASA Technical Reports Server (NTRS)

    Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)

    1980-01-01

    The problem of determining stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. It is shown that the proposed technique is viable and performs satisfactorily when the estimated stratum variances are compared to those obtained using LANDSAT data, provided a conservative value is used for the field size and the crop statistics are drawn from the small-political-subdivision level.
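
    The optimum allocation such stratum variances feed into is the classical Neyman allocation, in which the sample size for stratum h is proportional to N_h·S_h. A small sketch with hypothetical stratum counts and variance estimates (not values from the study):

      import numpy as np

      # hypothetical strata: numbers of sampling units and estimated std deviations
      N_h = np.array([1200, 800, 400])
      S_h = np.array([15.0, 9.0, 22.0])

      def neyman_allocation(n_total, N_h, S_h):
          # optimum (Neyman) allocation: n_h proportional to N_h * S_h
          w = N_h * S_h
          return np.maximum(1, np.round(n_total * w / w.sum())).astype(int)

      print(neyman_allocation(100, N_h, S_h))   # -> [53 21 26]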

  15. The Epiregolith

    NASA Technical Reports Server (NTRS)

    Mendell, Wendell W.; Noble, S. K.

    2010-01-01

    The physical properties of the lunar regolith were originally inferred from remotely sensed data, first from the Earth and later from orbiting spacecraft. The Surveyor landings and the Apollo surface explorations produced a more concrete characterization of the macroscopic properties. In general, the upper regolith consists of a loosely consolidated layer centimeters thick underlain by a particulate but extremely compacted layer extending to depths of meters or tens of meters. The median particle size as determined by mechanical sieving in terrestrial laboratories is several tens of micrometers. However, the comminuting processes that form the layer produce particles of all sizes down to nanometers. The smallest particles, having a high surface-to-volume ratio, tend to be electrostatically bound to larger particles and are quite difficult to separate mechanically in the laboratory. Particle size distributions determined from lunar soil samples therefore often group all particles smaller than 10 micrometers into a single fraction.

  16. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R² = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  17. [Determination of the distribution of relative molecular mass of organic matter by high pressure size exclusion chromatography with UV and TOC detectors].

    PubMed

    Zhang, Han; Dong, Bing-Zhi

    2012-09-01

    An on-line high pressure size exclusion chromatography (HPSEC) system with UV and TOC detectors was adapted to examine the distribution of relative molecular mass of natural organic matter (NOM). Through synchronous determination of UV254 and TOC responses over a wide range of relative molecular mass, it was possible to accurately characterize the structure of NOM, especially for non-aromatic and non-conjugated-double-bond organics that have low UV response. It was found that the TOC detector was capable of detecting all kinds of organic matter, including sucrose, sodium alginate and other hydrophilic organic compounds. The sample volume had a positive linear correlation with the TOC response, indicating that larger volumes produce stronger responses. The effect of ionic strength was relatively low, shown by the small decrease of peak area (1.2%) as NaCl increased from none to 0.2 mol/L. The pH of tested samples should be adjusted to neutral or acidic, because results for alkaline samples might be inaccurate. Compared with samples prepared in ultrapure water, samples prepared in the mobile-phase solvent showed less interference from the salt boundary peak. The on-line HPSEC-UV-TOC method can be used to accurately characterize the distribution of relative molecular mass and its four fractions in the Xiang River.

  18. Determining quantity and quality of retained oil in mature marly chalk and marlstone of the Cretaceous Niobrara Formation by low-temperature hydrous pyrolysis

    USGS Publications Warehouse

    Lewan, Michael; Sonnenfeld, Mark D.

    2017-01-01

    Low-temperature hydrous pyrolysis (LTHP) at 300°C (572°F) for 24 h released retained oils from 12- to 20-mesh size samples of mature Niobrara marly chalk and marlstone cores. The released oil accumulated on the water surface of the reactor, and is compositionally similar to oil produced from the same well. The quantities of oil released from the marly chalk and marlstone by LTHP are respectively 3.4 and 1.6 times greater than those determined by tight rock analyses (TRA) on aliquots of the same samples. Gas chromatograms indicated this difference is a result of TRA oils losing more volatiles and volatilizing less heavy hydrocarbons during collection than LTHP oils. Characterization of the rocks before and after LTHP by programmable open-system pyrolysis (HAWK) indicates that under LTHP conditions no significant oil is generated and only preexisting retained oil is released. Although LTHP appears to provide better predictions of the quantity and quality of retained oil in a mature source rock, it is not expected to replace TRA, which is more efficient in both time and sample size. However, LTHP can be applied to composited samples from key intervals or lithologies originally recognized by TRA. Additional studies on the duration, temperature, and sample size used in LTHP may further optimize its utility.

  19. Adequacy of laser diffraction for soil particle size analysis

    PubMed Central

    Fisher, Peter; Aumann, Colin; Chia, Kohleth; O'Halloran, Nick; Chandra, Subhash

    2017-01-01

    Sedimentation has been a standard methodology for particle size analysis since the early 1900s. In recent years laser diffraction is beginning to replace sedimentation as the preferred technique in some industries, such as marine sediment analysis. However, for the particle size analysis of soils, which have a diverse range of both particle size and shape, laser diffraction still requires evaluation of its reliability. In this study, the sedimentation-based sieve plummet balance method and the laser diffraction method were used to measure the particle size distribution of 22 soil samples representing four contrasting Australian Soil Orders. Initially, a precise wet riffling methodology was developed that is capable of obtaining representative samples within the recommended obscuration range for laser diffraction. It was found that repeatable results were obtained even if measurements were made at the extreme ends of the manufacturer’s recommended obscuration range. Results from statistical analysis suggested that the use of sample pretreatment to remove soil organic carbon (and possible traces of calcium-carbonate content) made minor differences to the laser diffraction particle size distributions compared to no pretreatment. These differences were found to be marginally statistically significant in the Podosol topsoil and Vertosol subsoil. There are well known reasons why sedimentation methods may be considered to ‘overestimate’ plate-like clay particles, while laser diffraction will ‘underestimate’ the proportion of clay particles. In this study we used Lin’s concordance correlation coefficient to determine the equivalence of laser diffraction and sieve plummet balance results. The results suggested that the laser diffraction equivalent thresholds corresponding to the sieve plummet balance cumulative particle sizes of < 2 μm, < 20 μm, and < 200 μm, were < 9 μm, < 26 μm, < 275 μm respectively. The many advantages of laser diffraction for soil particle size analysis, and the empirical results of this study, suggest that deployment of laser diffraction as a standard test procedure can provide reliable results, provided consistent sample preparation is used. PMID:28472043
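
    Lin's concordance correlation coefficient used here measures agreement between two methods as rho_c = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2), penalizing both poor correlation and systematic offset. A short sketch with hypothetical paired clay-fraction readings (not data from the study):

      import numpy as np

      def lins_ccc(x, y):
          # Lin's concordance correlation coefficient between two measurement methods
          mx, my = np.mean(x), np.mean(y)
          cov = np.mean((x - mx) * (y - my))
          return 2 * cov / (np.var(x) + np.var(y) + (mx - my) ** 2)

      sieve_plummet = np.array([22.0, 35.5, 41.0, 18.2, 29.7])      # hypothetical (%)
      laser_diffraction = np.array([20.5, 33.0, 39.8, 17.0, 28.1])  # hypothetical (%)
      print(round(lins_ccc(sieve_plummet, laser_diffraction), 3))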

  1. Size distribution and sources of humic-like substances in particulate matter at an urban site during winter.

    PubMed

    Park, Seungshik; Son, Se-Chang

    2016-01-01

    This study investigates the size distribution and possible sources of humic-like substances (HULIS) in ambient aerosol particles collected at an urban site in Gwangju, Korea during the winter of 2015. A total of 10 sets of size-segregated aerosol samples were collected using a 10-stage Micro-Orifice Uniform Deposit Impactor (MOUDI), and the samples were analyzed to determine the mass as well as the presence of ionic species (Na(+), NH4(+), K(+), Ca(2+), Mg(2+), Cl(-), NO3(-), and SO4(2-)), water-soluble organic carbon (WSOC) and HULIS. The separation and quantification of the size-resolved HULIS components from the MOUDI samples was accomplished using a Hydrophilic-Lipophilic Balanced (HLB) solid phase extraction method and a total organic carbon analyzer, respectively. The entire sampling period was divided into two periods: non-Asian dust (NAD) and Asian dust (AD) periods. The contributions of water-soluble organic mass (WSOM = 1.9 × WSOC) and HULIS (=1.9 × HULIS-C) to fine particles (PM1.8) were approximately two times higher in the NAD samples (23.2 and 8.0%) than in the AD samples (12.8 and 4.2%). However, the HULIS-C/WSOC ratio in PM1.8 showed little difference between the NAD (0.35 ± 0.07) and AD (0.35 ± 0.05) samples. The HULIS exhibited a uni-modal size distribution (at 0.55 μm) during NAD and a bimodal distribution (at 0.32 and 1.8 μm) during AD, which was quite similar to the mass size distributions of particulate matter, WSOC, NO3(-), SO4(2-), and NH4(+) in both the NAD and AD samples. The size distribution characteristics and the results of the correlation analyses indicate that the sources of HULIS varied according to the particle size. In the fine mode (≤1.8 μm), the HULIS composition during the NAD period was strongly associated with secondary organic aerosol (SOA) formation processes similar to those of secondary ionic species (cloud processing and/or heterogeneous reactions) and primary emissions during the biomass burning period, and during the AD period, it was only associated with SOA formation. In the coarse mode (3.1-10 μm), it was difficult to identify the HULIS sources during the NAD period, and during the AD period, the HULIS was most likely associated with soil-related particles [Ca(NO3)2 and CaSO4] and/or sea-salt particles (NaNO3 and Na2SO4).

  2. DNA pooling strategies for categorical (ordinal) traits

    USDA-ARS?s Scientific Manuscript database

    Despite reduced genotyping costs in recent years, obtaining genotypes for all individuals in a population may still not be feasible when sample size is large. DNA pooling provides a useful alternative to determining genotype effects. Clustering algorithms allow for grouping of individuals (observati...

  3. Particle Size Distribution of Serratia marcescens Aerosols Created During Common Laboratory Procedures and Simulated Laboratory Accidents

    PubMed Central

    Kenny, Michael T.; Sabel, Fred L.

    1968-01-01

    Andersen air samplers were used to determine the particle size distribution of Serratia marcescens aerosols created during several common laboratory procedures and simulated laboratory accidents. Over 1,600 viable particles per cubic foot of air sampled were aerosolized during blending operations. More than 98% of these particles were less than 5 μ in size. In contrast, 80% of the viable particles aerosolized by handling lyophilized cultures were larger than 5 μ. Harvesting infected eggs, sonic treatment, centrifugation, mixing cultures, and dropping infectious material produced aerosols composed primarily of particles in the 1.0- to 7.5-μ size range. PMID:4877498

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Under contract with the US Department of Energy (DE-AC22-92PCO0367), Pittsburgh Energy Technology Center, Radian Corporation has conducted a test program to collect and analyze size-fractionated stack gas particulate samples for selected inorganic hazardous air pollutants (HAPS). Specific goals of the program are (1) the collection of one-gram quantities of size-fractionated stack gas particulate matter for bulk (total) and surface chemical characterization, and (2) the determination of the relationship between particle size, bulk and surface (leachable) composition, and unit load. The information obtained from this program identifies the effects of unit load, particle size, and wet FGD system operation on the relative toxicological effects of exposure to particulate emissions.

  5. A Meta-Analysis of Mathematics and Working Memory: Moderating Effects of Working Memory Domain, Type of Mathematics Skill, and Sample Characteristics

    ERIC Educational Resources Information Center

    Peng, Peng; Namkung, Jessica; Barnes, Marcia; Sun, Congying

    2016-01-01

    The purpose of this meta-analysis was to determine the relation between mathematics and working memory (WM) and to identify possible moderators of this relation including domains of WM, types of mathematics skills, and sample type. A meta-analysis of 110 studies with 829 effect sizes found a significant medium correlation of mathematics and WM, r…

  6. The study of the effect of aluminum powders dispersion on the oxidation and kinetic characteristics

    NASA Astrophysics Data System (ADS)

    Gorbenko, T. I.; Gorbenko, M. V.; Orlova, M. P.; Volkov, S. A.

    2017-11-01

    Differential-scanning calorimetry (DSC) and thermogravimetric analysis (TG) were used to study micro-sized aluminum powder ASD-4 and nano-sized powder Alex. The dependence of the oxidation process on the dispersion of the sample particles is shown. The influence of thermogravimetric conditions on the thermal regime of the process was considered, and its kinetic parameters were determined. Calculations of the activation energy and the pre-exponential factor were carried out.

  7. Analysis of a MIL-L-27502 lubricant from a gas-turbine engine test by size-exclusion chromatography

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.; Morales, W.

    1983-01-01

    Size exclusion chromatography was used to determine the chemical degradation of MIL-L-27502 oil samples from a gas turbine engine test run at a bulk oil temperature of 216 °C. Results revealed a progressive loss of primary ester and additive depletion and the formation of higher molecular weight products with time. The high molecular weight products absorbed strongly in the ultraviolet, indicating the presence of chromophoric groups.

  8. The Effect of Time, Roasting Temperature, and Grind Size on Caffeine and Chlorogenic Acid Concentrations in Cold Brew Coffee.

    PubMed

    Fuller, Megan; Rao, Niny Z

    2017-12-21

    The extraction kinetics and equilibrium concentrations of caffeine and 3-chlorogenic acid (3-CGA) in cold brew coffee were investigated by brewing four coffee samples (dark roast/medium grind, dark roast/coarse grind, medium roast/medium grind, medium roast/coarse grind) using cold and hot methods. 3-CGA and caffeine were found at higher concentrations in cold brew coffee made with medium roast coffees rather than dark roast. Grind size did not significantly impact the 3-CGA and caffeine concentrations of cold brew samples, indicating that the rate-determining step in extraction of these compounds does not depend on surface area. Caffeine concentrations in cold brew coarse-grind samples were substantially higher than in their hot brew counterparts. 3-CGA concentrations and pH were comparable between cold and hot brews. This work suggests that the difference in acidity of cold brew coffee is likely not due to 3-CGA or caffeine concentrations, considering that most acids in coffee are highly soluble and extract quickly. It was determined that caffeine and 3-CGA concentrations reached equilibrium according to first-order kinetics between 6 and 7 hours in all cold brew samples, rather than the 10 to 24 hours outlined in typical cold-brew methods.
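
    The first-order approach to equilibrium reported here corresponds to C(t) = C_eq*(1 - exp(-k*t)). A brief sketch with a hypothetical rate constant chosen so extraction is roughly 99% complete by about 6.5 hours (k is illustrative, not a fitted value from the paper):

      import numpy as np

      def first_order_concentration(t_hours, c_eq, k_per_hour):
          # C(t) = C_eq * (1 - exp(-k t)): first-order approach to equilibrium
          return c_eq * (1 - np.exp(-k_per_hour * t_hours))

      k = 0.7  # hypothetical first-order rate constant (1/h)
      for t in (1.0, 3.0, 6.5, 12.0):
          print(f"{t:4.1f} h: {first_order_concentration(t, 1.0, k):.3f} of C_eq")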

  9. Effect of heat treatment on the crystal structure of deformed samples of chromium-manganese steel

    NASA Astrophysics Data System (ADS)

    Chezganov, D. S.; Chikova, O. A.; Borovykh, M. A.

    2017-09-01

    Results of studying the microstructure and crystal structure of samples of 35KhGF steel (0.31-0.38 wt % C, 0.17-0.37 wt % Si, 0.95-1.25 wt % Mn, 1.0-1.3 wt % Cr, 0.06-0.12 wt % V, the remainder Fe) are presented. The samples were taken from hot-rolled pipes subjected to different heat treatments. The study was carried out to justify the choice of heat-treatment regime by determining the structure-property relationship that increases the corrosion resistance of the pipes to hydrocarbons. Energy-dispersive X-ray spectroscopy (EDS) and electron backscatter diffraction (EBSD) were used. In the microstructure of the samples, oxide inclusions and discontinuities 1-50 μm in size, presumably consisting of scale, were detected. The ferrite grain size and the orientations of crystals were determined, and data on local mechanical stresses were obtained from Taylor orientation-factor maps. Grain refinement, an increase in the fraction of low-angle boundaries, and a decrease in local mechanical stresses, and therefore the highest corrosion resistance to hydrocarbons, are achieved by normalizing at 910°C.

  10. Knowledge and use of information and communication technology by health sciences students of the University of Ghana.

    PubMed

    Dery, Samuel; Vroom, Frances da-Costa; Godi, Anthony; Afagbedzi, Seth; Dwomoh, Duah

    2016-09-01

    Studies have shown that ICT adoption contributes to productivity and economic growth. It is therefore important that health workers have knowledge of ICT to ensure adoption and uptake of ICT tools for efficient health delivery. The objective was to determine the knowledge and use of ICT among students of the College of Health Sciences at the University of Ghana. This was a cross-sectional study conducted among students in all five Schools of the College of Health Sciences at the University of Ghana. A total of 773 students were sampled from the Schools. Sampling proportionate to size was used to determine the sample sizes required for each school, academic programme and level of programme; simple random sampling was subsequently used to select students from each stratum. Computer knowledge was high among students at almost 99%. About 83% owned computers (p < 0.001) and self-rated computer knowledge was 87% (p < 0.001). Usage was mostly for studying, at 93% (p < 0.001). This study shows students have adequate knowledge and use of computers. It presents an opportunity to introduce ICT in healthcare delivery to them and will ensure their preparedness to embrace new ways of delivering care to improve service delivery. Africa Build Project, Grant Number: FP7-266474.
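
    A minimal sketch of the two-step design described above (hypothetical school enrolments, not the study's sampling frame): allocate the total sample proportionately to stratum size, then draw a simple random sample within each stratum:

      import numpy as np

      rng = np.random.default_rng(2)
      schools = {"A": 950, "B": 610, "C": 480, "D": 300, "E": 210}  # hypothetical enrolments

      def proportionate_allocation(sizes, n_total):
          # allocate the total sample across strata in proportion to stratum size;
          # simple rounding can leave the total off by one or two
          total = sum(sizes.values())
          return {k: round(n_total * v / total) for k, v in sizes.items()}

      alloc = proportionate_allocation(schools, 773)
      # simple random sampling (without replacement) of student indices per stratum
      samples = {k: rng.choice(schools[k], size=n, replace=False) for k, n in alloc.items()}
      print(alloc)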

  11. Analytical approaches for the characterization and quantification of nanoparticles in food and beverages.

    PubMed

    Mattarozzi, Monica; Suman, Michele; Cascio, Claudia; Calestani, Davide; Weigel, Stefan; Undas, Anna; Peters, Ruud

    2017-01-01

    Estimating consumer exposure to nanomaterials (NMs) in food products and predicting their toxicological properties are necessary steps in the assessment of the risks of this technology. To this end, analytical methods have to be available to detect, characterize and quantify NMs in food and materials related to food, e.g. food packaging and biological samples following metabolization of food. The challenge for the analytical sciences is that the characterization of NMs requires chemical as well as physical information. This article offers a comprehensive analysis of methods available for the detection and characterization of NMs in food and related products. Special attention was paid to the crucial role of sample preparation methods, since these have been partially neglected in the scientific literature so far. The currently available instrumental methods are grouped as fractionation, counting and ensemble methods, and their advantages and limitations are discussed. We conclude that much progress has been made over the last 5 years but that many challenges still exist. Future perspectives and priority research needs are pointed out. Graphical abstract: Two possible analytical strategies for the sizing and quantification of nanoparticles: Asymmetric Flow Field-Flow Fractionation with multiple detectors (allows the determination of true size and a mass-based particle size distribution), and Single Particle Inductively Coupled Plasma Mass Spectrometry (allows the determination of a spherical equivalent diameter of the particle and a number-based particle size distribution).

  12. Size characterization by Sedimentation Field Flow Fractionation of silica particles used as food additives.

    PubMed

    Contado, Catia; Ravani, Laura; Passarella, Martina

    2013-07-25

    Four types of SiO2, available on the market as additives in food and personal care products, were size characterized using Sedimentation Field Flow Fractionation (SdFFF), SEM, TEM and Photon Correlation Spectroscopy (PCS). The synergic use of the different analytical techniques made it possible, for some samples, to confirm the presence of primary nanoparticles (10 nm) organized in clusters or aggregates of different dimensions and, for others, to discover that the available information is incomplete, particularly regarding the presence of small particles. A protocol to extract the silica particles from a simple food matrix was set up by enriching (0.25%, w/w) a nearly silica-free instant barley coffee powder with a known SiO2 sample. The SdFFF technique, in conjunction with SEM observations, made it possible to identify the added SiO2 particles and verify the new particle size distribution. The SiO2 content of different powdered foodstuffs was determined by graphite furnace atomic absorption spectroscopy (GFAAS); the concentrations ranged between 0.006 and 0.35% (w/w). The protocol to isolate the silica particles was then applied to the most SiO2-rich commercial products, and the resulting suspensions were separated by SdFFF; SEM and TEM observations supported the size analyses, while GFAAS determinations on collected fractions permitted element identification. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. PSP toxin levels and plankton community composition and abundance in size-fractionated vertical profiles during spring/summer blooms of the toxic dinoflagellate Alexandrium fundyense in the Gulf of Maine and on Georges Bank, 2007, 2008, and 2010: 2. Plankton community composition and abundance.

    PubMed

    Petitpas, Christian M; Turner, Jefferson T; Deeds, Jonathan R; Keafer, Bruce A; McGillicuddy, Dennis J; Milligan, Peter J; Shue, Vangie; White, Kevin D; Anderson, Donald M

    2014-05-01

    As part of the Gulf of Maine Toxicity (GOMTOX) project, we determined Alexandrium fundyense abundance, paralytic shellfish poisoning (PSP) toxin levels in various plankton size fractions, and the community composition of potential grazers of A. fundyense in plankton size fractions during blooms of this toxic dinoflagellate in the coastal Gulf of Maine and on Georges Bank in spring and summer of 2007, 2008, and 2010. PSP toxins and A. fundyense cells were found throughout the sampled water column (down to 50 m) in the 20-64 μm size fractions. While PSP toxins were widespread throughout all size classes of the zooplankton grazing community, the majority of the toxin was measured in the 20-64 μm size fraction. A. fundyense cellular toxin content estimated from field samples was significantly higher in the coastal Gulf of Maine than on Georges Bank. Most samples containing PSP toxins in the present study had diverse assemblages of grazers. However, some samples clearly suggested PSP toxin accumulation in several different grazer taxa, including tintinnids, heterotrophic dinoflagellates of the genus Protoperidinium, barnacle nauplii, the harpacticoid copepod Microsetella norvegica, the calanoid copepods Calanus finmarchicus and Pseudocalanus spp., the marine cladoceran Evadne nordmanni, and hydroids of the genus Clytia. Thus, a diverse assemblage of zooplankton grazers accumulated PSP toxins through food-web interactions. This raises the question of whether PSP toxins pose a potential human health risk not only from nearshore bivalve shellfish, but also potentially from fish and other upper-level consumers in zooplankton-based pelagic food webs.

  14. The effectiveness of increased apical enlargement in reducing intracanal bacteria.

    PubMed

    Card, Steven J; Sigurdsson, Asgeir; Orstavik, Dag; Trope, Martin

    2002-11-01

    It has been suggested that the apical portion of a root canal is not adequately disinfected by typical instrumentation regimens. The purpose of this study was to determine whether instrumentation to sizes larger than typically used would more effectively remove culturable bacteria from the canal. Forty patients with clinical and radiographic evidence of apical periodontitis were recruited from the endodontic clinic. Mandibular cuspids (n = 2), bicuspids (n = 11), and molars (mesial roots) (n = 27) were selected for the study. Bacterial sampling was performed upon access and after each of two consecutive instrumentations. The first instrumentation utilized 1% NaOCl and 0.04 taper ProFile rotary files. The cuspid and bicuspid canals were instrumented to a #8 size and the molar canals to a #7 size. The second instrumentation utilized LightSpeed files and 1% NaOCl irrigation for further enlargement of the apical third. Typically, molars were instrumented to size 60 and cuspid/bicuspid canals to size 80. Our findings show that 100% of the cuspid/bicuspid canals and 81.5% of the molar canals were rendered bacteria-free after the first instrumentation. The molar results improved to 89% after the second instrumentation. Of the molar mesial canals without a clinically detectable communication (59.3%), 93% were rendered bacteria-free with the first instrumentation. Using a Wilcoxon rank sum test, statistically significant differences (p < 0.0001) were found between the initial sample and the samples after the first and second instrumentations. The differences between the samples that followed the two instrumentation regimens were not significant (p = 0.0617). It is concluded that simple root canal systems (without multiple canal communications) may be rendered bacteria-free when preparation of this type is utilized.

  15. From the field: Efficacy of detecting Chronic Wasting Disease via sampling hunter-killed white-tailed deer

    USGS Publications Warehouse

    Diefenbach, D.R.; Rosenberry, C.S.; Boyd, Robert C.

    2004-01-01

    Surveillance programs for Chronic Wasting Disease (CWD) in free-ranging cervids often use a standard of being able to detect 1% prevalence when determining minimum sample sizes. However, 1% prevalence may represent >10,000 infected animals in a population of 1 million, and most wildlife managers would prefer to detect the presence of CWD when far fewer infected animals exist. We wanted to detect the presence of CWD in white-tailed deer (Odocoileus virginianus) in Pennsylvania when the disease was present in only 1 of 21 wildlife management units (WMUs) statewide. We used computer simulation to estimate the probability of detecting CWD based on a sampling design to detect the presence of CWD at 0.1% and 1.0% prevalence (23-76 and 225-762 infected deer, respectively) using tissue samples collected from hunter-killed deer. The probability of detection at 0.1% prevalence was <30% with sample sizes of ≤6,000 deer, and the probability of detection at 1.0% prevalence was 46-72% with statewide sample sizes of 2,000-6,000 deer. We believe that testing of hunter-killed deer is an essential part of any surveillance program for CWD, but our results demonstrate the importance of a multifaceted surveillance approach for CWD detection rather than sole reliance on testing hunter-killed deer.
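
    For a single well-mixed population sampled at random, the detection probability underlying such designs has a simple closed form: one minus the chance that none of the n sampled animals is infected. The sketch below (hypothetical herd size; assumes a perfect test and homogeneous mixing) also shows why the authors' simulated probabilities are far lower than this naive bound: in their scenario the infected deer were confined to one of 21 WMUs while sampling was spread statewide.

      def detection_probability(N, prevalence, n):
          # P(at least one infected animal in a simple random sample of n from N)
          K = round(N * prevalence)
          p_none = 1.0
          for i in range(n):                 # hypergeometric product form
              p_none *= (N - K - i) / (N - i)
          return 1 - p_none

      for n in (2000, 4000, 6000):
          print(n, round(detection_probability(1_000_000, 0.001, n), 3))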

  16. Effect of synthesis methods with different annealing temperatures on micro structure, cations distribution and magnetic properties of nano-nickel ferrite

    NASA Astrophysics Data System (ADS)

    El-Sayed, Karimat; Mohamed, Mohamed Bakr; Hamdy, Sh.; Ata-Allah, S. S.

    2017-02-01

    Nano-crystalline NiFe2O4 was synthesized by citrate and sol-gel methods at different annealing temperatures, and the results were compared with a bulk sample prepared by the ceramic method. The effects of preparation method and annealing temperature on the crystallite size, strain, bond lengths, bond angles, cation distribution and degree of inversion were investigated by X-ray powder diffraction, high resolution transmission electron microscopy, Mössbauer effect spectrometry and vibrating sample magnetometry. The cation distributions at both octahedral and tetrahedral sites were determined using both Mössbauer effect spectroscopy and a modified Bertaut method within Rietveld refinement. The Mössbauer effect spectra showed a regular decrease in the hyperfine field with decreasing particle size. Saturation magnetization and coercivity are found to be affected by the particle size and the cation distribution.

  17. Molecular-Size-Separated Brown Carbon Absorption for Biomass-Burning Aerosol at Multiple Field Sites.

    PubMed

    Di Lorenzo, Robert A; Washenfelder, Rebecca A; Attwood, Alexis R; Guo, Hongyu; Xu, Lu; Ng, Nga L; Weber, Rodney J; Baumann, Karsten; Edgerton, Eric; Young, Cora J

    2017-03-21

    Biomass burning is a known source of brown carbon aerosol in the atmosphere. We collected filter samples of biomass-burning emissions at three locations in Canada and the United States with transport times of 10 h to >3 days. We analyzed the samples with size-exclusion chromatography coupled to molecular absorbance spectroscopy to determine absorbance as a function of molecular size. The majority of absorption was due to molecules >500 Da, and these contributed an increasing fraction of absorption as the biomass-burning aerosol aged. This suggests that the smallest molecular weight fraction is more susceptible to processes that lead to reduced light absorption, while larger-molecular-weight species may represent recalcitrant brown carbon. We calculate that these large-molecular-weight species are composed of more than 20 carbons with as few as two oxygens and would be classified as extremely low volatility organic compounds (ELVOCs).

  18. Isothermal titration calorimetry in nanoliter droplets with subsecond time constants.

    PubMed

    Lubbers, Brad; Baudenbacher, Franz

    2011-10-15

    We reduced the reaction volume in microfabricated suspended-membrane titration calorimeters to nanoliter droplets and improved the sensitivities to below a nanowatt with time constants of around 100 ms. The device performance was characterized using exothermic acid-base neutralizations and a detailed numerical model. The finite element based numerical model allowed us to determine the sensitivities within 1% and the temporal dynamics of the temperature rise in neutralization reactions as a function of droplet size. The model was used to determine the optimum calorimeter design (membrane size and thickness, junction area, and thermopile thickness) and sensitivities for sample volumes of 1 nL for silicon nitride and polymer membranes. We obtained a maximum sensitivity of 153 pW/(Hz)(1/2) for a 1 μm SiN membrane and 79 pW/(Hz)(1/2) for a 1 μm polymer membrane. The time constant of the calorimeter system was determined experimentally using a pulsed laser to increase the temperature of nanoliter sample volumes. For a 2.5 nanoliter sample volume, we experimentally determined a noise equivalent power of 500 pW/(Hz)(1/2) and a 1/e time constant of 110 ms for a modified commercially available infrared sensor with a thin-film thermopile. Furthermore, we demonstrated detection of 1.4 nJ reaction energies from injection of 25 pL of 1 mM HCl into a 2.5 nL droplet of 1 mM NaOH. © 2011 American Chemical Society

  19. Improving tritium exposure reconstructions using accelerator mass spectrometry

    PubMed Central

    Hunt, J. R.; Vogel, J. S.; Knezovich, J. P.

    2010-01-01

    Direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples and permits greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Because existing methodologies for quantifying tritium have some significant limitations, the development of tritium AMS has allowed improvements in reconstructing tritium exposure concentrations from environmental measurements and provides an important additional tool in assessing the temporal and spatial distribution of chronic exposure. Tritium exposure reconstructions using AMS were previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases. Although the current quantification limit of tritium AMS is not adequate to determine natural environmental variations in tritium concentrations, it is expected to be sufficient for studies assessing possible health effects from chronic environmental tritium exposure. PMID:14735274

  20. Isotopic signatures: An important tool in today's world

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rokop, D.J.; Efurd, D.W.; Benjamin, T.M.

    1995-12-01

    High-sensitivity/high-accuracy actinide measurement techniques developed to support weapons diagnostic capabilities at the Los Alamos National Laboratory are now being used for environmental monitoring. The measurement techniques used are Thermal Ionization Mass Spectrometry (TIMS), Alpha Spectrometry (AS), and High Resolution Gamma Spectrometry (HRGS). These techniques are used to address a wide variety of actinide inventory issues: environmental surveillance, site characterizations, food-chain member determination, sedimentary records of activities, and treaty compliance concerns. As little as 10 femtograms of plutonium can be detected in samples, and isotopic signatures can be determined on samples containing sub-100-femtogram amounts. Uranium, present in all environmental samples, can generally yield isotopic signatures of anthropogenic origin when present at the 40 picogram/gram level. Solid samples (soils, sediments, fauna, and tissue) can range from a few particles to several kilograms in size. Water samples can range from a few milliliters to as much as 200 liters.
