Sample records for small sample sizes

  1. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
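
    The paper's own estimator is not reproduced in the abstract, but the target quantity has a simple closed form under a normality assumption: since x̄ − μ ~ N(0, σ²/n), the probability that the sample mean falls within k standard deviations of the true mean is 2Φ(k√n) − 1. A minimal sketch:

    ```python
    # Hedged sketch assuming normal data; not the authors' method.
    from math import sqrt
    from scipy.stats import norm

    def prob_within(k: float, n: int) -> float:
        """P(|sample mean - true mean| <= k * true SD) for a normal sample of size n."""
        return 2 * norm.cdf(k * sqrt(n)) - 1

    for n in (3, 5, 10):
        print(n, round(prob_within(0.5, n), 3))  # n=5 gives ~0.737
    ```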

  2. Using the Student's "t"-Test with Extremely Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.

    2013-01-01

    Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…
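
    As an illustration of the question (not de Winter's own code), a quick Monte Carlo check of whether the two-sample t-test holds its nominal Type I error rate with only three observations per group:

    ```python
    # Both groups are drawn from the same population, so H0 is true by construction.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    reps, n, alpha = 20_000, 3, 0.05
    rejections = sum(ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue < alpha
                     for _ in range(reps))
    print(rejections / reps)  # close to 0.05 if the test holds its level
    ```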

  3. Standard Deviation for Small Samples

    ERIC Educational Resources Information Center

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
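
    The abstract does not spell out the representations, but one standard identity in the same spirit writes the sample variance in terms of pairwise differences, s² = Σᵢ<ⱼ (xᵢ − xⱼ)² / (n(n − 1)), so for n = 3 only three squared differences and a division by 6 are needed. A quick numerical check:

    ```python
    # Verifies the pairwise-difference identity against numpy's sample variance.
    import itertools
    import numpy as np

    x = np.array([2.0, 5.0, 9.0])
    n = len(x)
    pairwise = sum((a - b) ** 2 for a, b in itertools.combinations(x, 2))
    print(pairwise / (n * (n - 1)), np.var(x, ddof=1))  # both 12.333...
    ```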

  4. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition, or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.

  5. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests according to sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
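
    A sketch of the simulation idea (not Le Boedec's code, and the lognormal parent's skew is an assumption): here "specificity" is read as the rate at which samples from a non-Gaussian parent are correctly rejected, which is what degrades at n = 30.

    ```python
    import numpy as np
    from scipy.stats import shapiro

    rng = np.random.default_rng(1)
    reps, n, alpha = 5_000, 30, 0.05
    rejected = sum(shapiro(rng.lognormal(sigma=0.4, size=n)).pvalue < alpha
                   for _ in range(reps))
    print(rejected / reps)  # the complement is how often a non-Gaussian parent slips through
    ```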

  6. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    PubMed

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it infeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
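
    The trial analyses use Cox models; as a simpler self-contained stand-in, the sketch below applies Firth's penalized-likelihood correction to logistic regression, which uses the same small-sample bias-reduction idea (the hat-value adjustment of the score) in an easier setting. Data and dimensions are illustrative.

    ```python
    import numpy as np

    def firth_logistic(X, y, n_iter=50, tol=1e-8):
        """Firth-penalized logistic regression via modified Fisher scoring."""
        n, k = X.shape
        beta = np.zeros(k)
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            W = p * (1.0 - p)
            XWX = X.T @ (W[:, None] * X)          # Fisher information
            XW = X * np.sqrt(W)[:, None]
            h = np.einsum("ij,ij->i", XW @ np.linalg.inv(XWX), XW)  # hat values
            U = X.T @ (y - p + h * (0.5 - p))     # Firth-adjusted score
            step = np.linalg.solve(XWX, U)
            beta += step
            if np.max(np.abs(step)) < tol:
                break
        return beta

    rng = np.random.default_rng(2)
    X = np.column_stack([np.ones(12), rng.normal(size=12)])
    y = (rng.random(12) < 0.3).astype(float)
    print(firth_logistic(X, y))  # finite estimates even where plain MLE can diverge
    ```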

  7. Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.

    ERIC Educational Resources Information Center

    Parshall, Cynthia G.; Kromrey, Jeffrey D.

    1996-01-01

    Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
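
    For intuition, the exact and asymptotic tests can disagree noticeably on a small table (illustrative counts, not from the paper):

    ```python
    from scipy.stats import chi2_contingency, fisher_exact

    table = [[8, 2], [3, 7]]
    chi2, p_chi2, dof, _ = chi2_contingency(table)  # Yates correction applied by default for 2x2
    _, p_fisher = fisher_exact(table)
    print(round(p_chi2, 4), round(p_fisher, 4))
    ```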

  8. Statistical Analysis Techniques for Small Sample Sizes

    NASA Technical Reports Server (NTRS)

    Navard, S. E.

    1984-01-01

    The problem of small sample sizes encountered in the analysis of space-flight data is examined. Because so little data is available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on considerations needed to choose the most appropriate test for a given type of analysis.

  9. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    PubMed Central

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  10. An internal pilot design for prospective cancer screening trials with unknown disease prevalence.

    PubMed

    Brinton, John T; Ringham, Brandy M; Glueck, Deborah H

    2015-10-13

    For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I error inflation. For small samples (N < 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
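
    A bare-bones sketch of the internal-pilot mechanism (not the authors' screening-trial method; the design targets and the normal approximation are assumptions): accrue part of the planned sample, re-estimate the nuisance variance, and recompute the required size.

    ```python
    import numpy as np
    from scipy.stats import norm

    def required_n(sigma, delta, alpha=0.05, power=0.9):
        """Per-arm n for a two-sample comparison, normal approximation."""
        za, zb = norm.isf(alpha / 2), norm.ppf(power)
        return int(np.ceil(2 * ((za + zb) * sigma / delta) ** 2))

    rng = np.random.default_rng(3)
    sigma_guess, delta = 1.0, 0.8
    n_planned = required_n(sigma_guess, delta)                 # 33 per arm with these inputs
    interim = rng.normal(scale=1.5, size=(2, n_planned // 2))  # true SD exceeds the guess
    sigma_hat = interim.std(ddof=1)
    print(n_planned, "->", required_n(sigma_hat, delta))       # final n revised upward
    ```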

  11. Sample sizes and model comparison metrics for species distribution models

    Treesearch

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  12. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.

  13. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  14. Parameter Estimation with Small Sample Size: A Higher-Order IRT Model Approach

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Hong, Yuan

    2010-01-01

    Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…

  15. 77 FR 2697 - Proposed Information Collection; Comment Request; Annual Services Report

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-19

    ... and from a sample of small- and medium-sized businesses selected using a stratified sampling procedure... be canvassed when the sample is re-drawn, while nearly all of the small- and medium-sized firms from...); Educational Services (NAICS 61); Health Care and Social Assistance (NAICS 62); Arts, Entertainment, and...

  16. Technology Tips: Sample Too Small? Probably Not!

    ERIC Educational Resources Information Center

    Strayer, Jeremy F.

    2013-01-01

    Statistical studies are referenced in the news every day, so frequently that people are sometimes skeptical of reported results. Often, no matter how large a sample size researchers use in their studies, people believe that the sample size is too small to make broad generalizations. The tasks presented in this article use simulations of repeated…

  17. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    PubMed

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
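
    As a concrete example of one interval of this kind (a Hanley-McNeil-style Wald interval; not necessarily among the paper's 29 methods), computed from a deliberately small two-group sample:

    ```python
    import numpy as np
    from scipy.stats import norm

    def auc_ci(neg, pos, level=0.95):
        """Mann-Whitney AUC with a Hanley-McNeil (1982) variance approximation."""
        n_pos, n_neg = len(pos), len(neg)
        auc = np.mean([(x < y) + 0.5 * (x == y) for x in neg for y in pos])
        q1, q2 = auc / (2 - auc), 2 * auc**2 / (1 + auc)
        var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc**2)
               + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)
        z = norm.isf((1 - level) / 2)
        return auc, max(0.0, auc - z * var**0.5), min(1.0, auc + z * var**0.5)

    print(auc_ci(neg=[0.1, 0.4, 0.35, 0.8], pos=[0.9, 0.6, 0.65, 0.7, 0.95]))
    ```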

  18. What is the extent of prokaryotic diversity?

    PubMed Central

    Curtis, Thomas P; Head, Ian M; Lunn, Mary; Woodcock, Stephen; Schloss, Patrick D; Sloan, William T

    2006-01-01

    The extent of microbial diversity is an intrinsically fascinating subject of profound practical importance. The term ‘diversity’ may allude to the number of taxa or species richness as well as their relative abundance. There is uncertainty about both, primarily because sample sizes are too small. Non-parametric diversity estimators make gross underestimates if used with small sample sizes on unevenly distributed communities. One can make richness estimates over many scales using small samples by assuming a species/taxa-abundance distribution. However, no one knows what the underlying taxa-abundance distributions are for bacterial communities. Latterly, diversity has been estimated by fitting data from gene clone libraries and extrapolating from this to taxa-abundance curves to estimate richness. However, since sample sizes are small, we cannot be sure that such samples are representative of the community from which they were drawn. It is however possible to formulate, and calibrate, models that predict the diversity of local communities and of samples drawn from that local community. The calibration of such models suggests that migration rates are small and decrease as the community gets larger. The preliminary predictions of the model are qualitatively consistent with the patterns seen in clone libraries in ‘real life’. The validation of this model is also confounded by small sample sizes. However, if such models were properly validated, they could form invaluable tools for the prediction of microbial diversity and a basis for the systematic exploration of microbial diversity on the planet. PMID:17028084
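
    Chao1 is one of the nonparametric richness estimators of the kind the authors caution about; it uses only the counts of taxa seen once (f1) and twice (f2). A minimal version, from the standard formula rather than this paper:

    ```python
    # S_chao1 = S_obs + f1^2 / (2 * f2); bias-corrected form when f2 = 0.
    from collections import Counter

    def chao1(abundances):
        s_obs = sum(1 for a in abundances if a > 0)
        f = Counter(abundances)
        f1, f2 = f[1], f[2]
        if f2 == 0:
            return s_obs + f1 * (f1 - 1) / 2
        return s_obs + f1 * f1 / (2 * f2)

    print(chao1([5, 1, 1, 2, 1, 8, 2, 1]))  # 8 taxa observed -> estimate 12.0
    ```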

  19. A reliability evaluation methodology for memory chips for space applications when sample size is small

    NASA Technical Reports Server (NTRS)

    Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.

    2003-01-01

    This paper presents a reliability evaluation methodology for obtaining statistical reliability information on memory chips for space applications when the test sample size must be kept small because of the high cost of radiation-hardened memories.

  20. Confidence intervals for the population mean tailored to small sample sizes, with applications to survey sampling.

    PubMed

    Rosenblum, Michael A; van der Laan, Mark J

    2009-01-07

    The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
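
    A sketch of the tail-bound construction (the paper's narrower refined intervals are not reproduced; the bound B and the variance σ² are assumed known here, whereas practical variants estimate them): Bernstein's inequality P(|x̄ − μ| ≥ t) ≤ 2·exp(−n·t² / (2σ² + 2Bt/3)) for observations with |X − μ| ≤ B can be inverted at level α by solving a quadratic for the half-width t.

    ```python
    from math import log, sqrt

    def bernstein_halfwidth(n, sigma, B, alpha=0.05):
        """CI half-width valid at every n, from inverting the Bernstein tail bound."""
        c = log(2 / alpha)
        return (c * B / 3 + sqrt((c * B / 3) ** 2 + 2 * n * c * sigma**2)) / n

    for n in (10, 30, 100):
        print(n, round(bernstein_halfwidth(n, sigma=1.0, B=3.0), 3))  # wide, as the paper notes
    ```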

  1. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    PubMed

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  2. Small studies may overestimate the effect sizes in critical care meta-analyses: a meta-epidemiological study

    PubMed Central

    2013-01-01

    Introduction Small-study effects refer to the fact that trials with limited sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. Thus, the present study aimed to examine the presence and extent of small-study effects in critical care medicine. Methods Critical care meta-analyses involving randomized controlled trials and reporting mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) and small (<100 patients per arm) according to their sample sizes. The ratio of odds ratios (ROR) was calculated for each meta-analysis, and the RORs were then combined using a meta-analytic approach. An ROR < 1 indicated a larger beneficial effect in small trials. Small and large trials were compared on methodological quality, including sequence generation, blinding, allocation concealment, intention to treat and sample size calculation. Results A total of 27 critical care meta-analyses involving 317 trials were included. Of them, five meta-analyses showed statistically significant RORs < 1, and the other meta-analyses did not reach statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); the heterogeneity was moderate, with an I² of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generation, allocation concealment, blinding, intention to treat, sample size calculation and incomplete follow-up data. Conclusions Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality in small trials. Caution should be exercised in the interpretation of meta-analyses involving small trials. PMID:23302257
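
    The ROR arithmetic is straightforward to reproduce (made-up counts below, not the study's data): odds ratios are pooled within small and large trials, and their log-scale variances add.

    ```python
    from math import exp, log, sqrt

    def log_or(a, b, c, d):
        """Log odds ratio and its SE from a 2x2 table (events/non-events by arm)."""
        return log(a * d / (b * c)), sqrt(1/a + 1/b + 1/c + 1/d)

    lor_s, se_s = log_or(10, 40, 20, 30)        # pooled small trials (illustrative)
    lor_l, se_l = log_or(150, 850, 170, 830)    # pooled large trials (illustrative)
    se = sqrt(se_s**2 + se_l**2)
    print(round(exp(lor_s - lor_l), 2),         # ROR < 1: small trials show larger benefit
          [round(exp(lor_s - lor_l + s * 1.96 * se), 2) for s in (-1, 1)])  # 95% CI
    ```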

  3. Probability of coincidental similarity among the orbits of small bodies - I. Pairing

    NASA Astrophysics Data System (ADS)

    Jopek, Tadeusz Jan; Bronikowska, Małgorzata

    2017-09-01

    The probability of coincidental clustering among orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it differs for groups of 2, 3, 4, … members. The probability of coincidental clustering is assessed by numerical simulation; therefore, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes. We have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. For the user's convenience, we have also obtained several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.

  4. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  5. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Big assumptions for small samples in crop insurance

    Treesearch

    Ashley Elaine Hungerford; Barry Goodwin

    2014-01-01

    The purpose of this paper is to investigate the effects of crop insurance premiums being determined by small samples of yields that are spatially correlated. If spatial autocorrelation and small sample size are not properly accounted for in premium ratings, the premium rates may inaccurately reflect the risk of a loss.

  7. Topological Analysis and Gaussian Decision Tree: Effective Representation and Classification of Biosignals of Small Sample Size.

    PubMed

    Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong

    2017-09-01

    Bucking the trend of big data, in microdevice engineering, small sample size is common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought biosignal analysis new challenges. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from the traditional signal analysis in the spatiotemporal domain, we exploit the biosignal representation in the topological domain, which reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify the biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis, namely performing signal processing in the topological domain and handling extremely small datasets. Currently, there have been no related works that can efficiently tackle the dilemma between avoiding electrochemical reactions and accelerating the assay process using ACEK.

  8. Vitamin D receptor gene and osteoporosis - author's response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Looney, J.E.; Yoon, Hyun Koo; Fischer, M.

    1996-04-01

    We appreciate the comments of Dr. Nguyen et al. about our recent study, but we disagree with their suggestion that the lack of an association between low bone density and the BB VDR genotype, which we reported, is an artifact generated by the small sample size. Furthermore, our results are consistent with similar conclusions reached by a number of other investigators, as recently reported by Peacock. Peacock states: "Taken as a whole, the results of studies outlined ... indicate that VDR alleles cannot account for the major part of the heritable component of bone density as indicated by Morrison et al." The majority of the 17 studies cited in this editorial could not confirm an association between the VDR genotype and the bone phenotype. Surely one cannot criticize this combined work as representing an artifact because of a too small sample size. We do not dispute the suggestion by Nguyen et al. that large sample sizes are required to analyze small biological effects. This is evident in both Peacock's summary and in their own bone density studies. We did not design our study with a larger sample size because, based on the work of Morrison et al., we had hypothesized a large biological effect; large sample sizes are only needed for small biological effects. 4 refs.

  9. How accurate is the Pearson r-from-Z approximation? A Monte Carlo simulation study.

    PubMed

    Hittner, James B; May, Kim

    2012-01-01

    The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) from the ratio of two quantities: the standard normal deviate equivalent (Z-score) corresponding to a one-tailed p-value divided by the square root of the total (pooled) sample size. The formula has utility in meta-analytic work when reports of research contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. Results indicated that in some cases the formula did accurately estimate the sample correlation. However, when sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the Pearson r-from-Z approximation was very inaccurate. Detailed figures that provide guidance as to when the Pearson r-from-Z formula will likely yield valid inferences are presented.
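
    The approximation itself is a one-liner, shown here exactly as the abstract describes it (illustrative inputs):

    ```python
    from math import sqrt
    from scipy.stats import norm

    def r_from_z(p_one_tailed: float, n_total: int) -> float:
        """Pearson r estimated as Z / sqrt(N), with Z from a one-tailed p-value."""
        return norm.isf(p_one_tailed) / sqrt(n_total)

    print(round(r_from_z(0.025, 50), 3))  # Z = 1.96, N = 50 -> r ~ 0.277
    ```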

  10. Simulation analyses of space use: Home range estimates, variability, and sample size

    USGS Publications Warehouse

    Bekoff, Marc; Mech, L. David

    1984-01-01

    Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should ascertain between 100 and 200 locations in order to estimate reliably home range area. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also have to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.
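
    A toy version of such a simulation (assumed details: bivariate-normal space use, with home range taken as the convex-hull area of the locations) reproduces the qualitative pattern, with area rising and between-run variability falling as locations accumulate:

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(4)
    for n_locs in (25, 50, 100, 200, 400):
        areas = [ConvexHull(rng.normal(size=(n_locs, 2))).volume  # in 2-D, .volume is the area
                 for _ in range(200)]
        print(n_locs, round(np.mean(areas), 2), round(np.std(areas), 2))
    ```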

  11. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

    ABBREVIATIONS: AICc, Akaike's Information Criterion with small sample size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater… MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic… Parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample…
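
    The AICc in the abbreviation list is the standard small-sample correction to AIC (formula from general knowledge, not taken from the report):

    ```python
    def aicc(log_lik: float, k: int, n: int) -> float:
        """AIC plus the small-sample correction; the extra term vanishes as n grows."""
        return 2 * k - 2 * log_lik + 2 * k * (k + 1) / (n - k - 1)

    print(aicc(log_lik=-42.0, k=3, n=20))  # 91.5 vs. plain AIC of 90.0
    ```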

  12. Estimating accuracy of land-cover composition from two-stage cluster sampling

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.

    2009-01-01

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.

  13. Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu

    2016-12-21

    A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and is particularly useful for studying the chemistry in realistic devices. A key part of the paper is calibration of these small samples. Because these small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. So an important aspect of the paper is showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes approximately 10 times smaller than were available with our previous algorithm.
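
    A stripped-down version of the ordinary-least-squares flavor of global sensitivity analysis (illustrative linear model, not the authors' chemical-kinetic mechanism): regress the output on standardized inputs from a deliberately small sample and rank inputs by coefficient magnitude.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, k = 40, 6                                   # small sample, as in the paper's theme
    X = rng.uniform(-1, 1, size=(n, k))
    true = np.array([3.0, -2.0, 0.5, 0.0, 1.0, 0.0])
    y = X @ true + 0.1 * rng.normal(size=n)

    Xs = (X - X.mean(0)) / X.std(0)                # standardize so coefficients are comparable
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), y, rcond=None)
    print("ranking, most to least sensitive:", np.argsort(-np.abs(coef[1:])))
    ```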

  14. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both the kernel function type and the loss function used.
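
    A minimal version of the comparison (synthetic data standing in for the core and log measurements): ε-insensitive SVR against an MLP when only 20 training samples are available.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(6)
    X = rng.uniform(0, 1, size=(220, 3))
    y = 10 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.2 * rng.normal(size=220)
    X_tr, y_tr, X_te, y_te = X[:20], y[:20], X[20:], y[20:]   # tiny training set

    svr = SVR(kernel="rbf", epsilon=0.1).fit(X_tr, y_tr)
    mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=0).fit(X_tr, y_tr)
    for name, model in (("SVR", svr), ("MLP", mlp)):
        print(name, round(mean_squared_error(y_te, model.predict(X_te)), 3))
    ```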

  15. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

  16. Large sample area and size are needed for forest soil seed bank studies to ensure low discrepancy with standing vegetation.

    PubMed

    Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin

    2014-01-01

    A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples of 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a larger number of small-sized samples (LNSS), from a 1 ha forest plot and placed them to germinate in a greenhouse, and we collected 30 samples of 1 m × 1 m × 10 cm, referred to as a small number of large-sized samples (SNLS), and placed them (10 each) in a nearby secondary forest, shrub land and grass land. Only 15.7% of woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% of woody plant species detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increase in number of species with sampled area confirmed power-law relationships for the forest stand, the LNSS and the SNLS at all three recipient sites. Our results, although based on one forest, indicate that the conventional LNSS strategy did not yield a high percentage of detection for woody species, whereas the SNLS strategy yielded a higher percentage of detection for woody species in the seed bank if samples were exposed to a better field germination environment. A 4 m² minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. An increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low.

  17. Study of the gel films of Acetobacter Xylinum cellulose and its modified samples by {sup 1}H NMR cryoporometry and small-angle X-ray scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babushkina, T. A.; Klimova, T. P.; Shtykova, E. V.

    2010-03-15

    Gel films of Acetobacter Xylinum cellulose and its modified samples have been investigated by ¹H nuclear magnetic resonance (NMR) cryoporometry and small-angle X-ray scattering. The joint use of these two methods made it possible to characterize the sizes of aqueous pores in gel films and estimate the sizes of structural inhomogeneities before and after the sorption of polyvinylpyrrolidone and Se(0) nanoparticles (stabilized by polyvinylpyrrolidone) into the films. According to small-angle X-ray scattering data, the sizes of inhomogeneities in a gel film change only slightly upon the sorption of polyvinylpyrrolidone and nanoparticles. The impregnated material is sorbed into water-filled cavities that are present in the gel film. ¹H NMR cryoporometry allowed us to reveal the details of changes in the sizes of small aqueous pores during modifications.

  18. Using Data-Dependent Priors to Mitigate Small Sample Bias in Latent Growth Models: A Discussion and Illustration Using M"plus"

    ERIC Educational Resources Information Center

    McNeish, Daniel M.

    2016-01-01

    Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…

  19. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

    To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a novel two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size, with nonconformities modeled by a hypergeometric distribution function, and the second is for a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
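
    The two distributional cases translate directly into acceptance probabilities P(D ≤ c) (illustrative lot sizes and acceptance numbers, not the paper's optimized plans):

    ```python
    from scipy.stats import hypergeom, poisson

    def p_accept_small_lot(lot_N, defects_M, sample_n, c):
        """Small lot: nonconformities in the sample follow a hypergeometric distribution."""
        return hypergeom.cdf(c, lot_N, defects_M, sample_n)

    def p_accept_large_lot(sample_n, defect_rate, c):
        """Large lot: nonconformity count approximated as Poisson."""
        return poisson.cdf(c, sample_n * defect_rate)

    print(round(p_accept_small_lot(lot_N=50, defects_M=4, sample_n=10, c=1), 3))
    print(round(p_accept_large_lot(sample_n=125, defect_rate=0.01, c=2), 3))
    ```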

  20. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770

  1. Internal pilots for a class of linear mixed models with Gaussian and compound symmetric data

    PubMed Central

    Gurka, Matthew J.; Coffey, Christopher S.; Muller, Keith E.

    2015-01-01

    An internal pilot design uses interim sample size analysis, without interim data analysis, to adjust the final number of observations. The approach helps to choose a sample size sufficiently large (to achieve the statistical power desired), but not too large (which would waste money and time). We report on recent research in cerebral vascular tortuosity (curvature in three dimensions) which would benefit greatly from internal pilots due to uncertainty in the parameters of the covariance matrix used for study planning. Unfortunately, observations correlated across the four regions of the brain and small sample sizes preclude using existing methods. However, as in a wide range of medical imaging studies, tortuosity data have no missing or mistimed data, a factorial within-subject design, the same between-subject design for all responses, and a Gaussian distribution with compound symmetry. For such restricted models, we extend exact, small sample univariate methods for internal pilots to linear mixed models with any between-subject design (not just two groups). Planning a new tortuosity study illustrates how the new methods help to avoid sample sizes that are too small or too large while still controlling the type I error rate. PMID:17318914

  2. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    NASA Astrophysics Data System (ADS)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increase in intensity, within the same divergence limits, ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  3. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters must be estimated from small samples. The small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample-size effects and more accurate in sEMG PDF shape screening applications.

  4. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly large samples drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
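
    A toy two-stage matrix (not the study's plant data) shows the mechanism: survival rates estimated from n individuals feed a nonlinear function (the dominant eigenvalue), so sampling variance produces a bias in lambda that shrinks as n grows.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def lam(s1, s2, fec=1.2):
        """Dominant eigenvalue (lambda) of a toy two-stage projection matrix."""
        A = np.array([[0.0, fec], [s1, s2]])
        return max(abs(np.linalg.eigvals(A)))

    s1, s2 = 0.5, 0.6            # low survival, where the paper found the largest bias
    true_lambda = lam(s1, s2)
    for n in (10, 50, 500):      # individuals used to estimate each survival rate
        est = [lam(rng.binomial(n, s1) / n, rng.binomial(n, s2) / n)
               for _ in range(2000)]
        print(n, round(np.mean(est) - true_lambda, 4))  # |mean bias| shrinks with n
    ```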

  5. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    PubMed Central

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-01-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate from 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from −4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233
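
    The sensitivity experiment is easy to emulate with synthetic allometry (assumed parameters, not the LiDAR data): fit log(height) on log(crown radius) in subsamples of varying size and watch the spread of the fitted exponent.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    N = 100_000
    crown = rng.lognormal(mean=1.0, sigma=0.4, size=N)
    height = np.exp(0.8 + 0.6 * np.log(crown) + 0.2 * rng.normal(size=N))

    for n in (20, 100, 1000):
        slopes = []
        for _ in range(500):
            idx = rng.choice(N, size=n, replace=False)
            slopes.append(np.polyfit(np.log(crown[idx]), np.log(height[idx]), 1)[0])
        print(n, round(np.mean(slopes), 3), round(np.std(slopes), 3))  # spread shrinks with n
    ```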

  6. Meta-analysis of genome-wide association from genomic prediction models

    USDA-ARS?s Scientific Manuscript database

    A limitation of many genome-wide association studies (GWA) in animal breeding is that there are many loci with small effect sizes; thus, larger sample sizes (N) are required to guarantee suitable power of detection. To increase sample size, results from different GWA can be combined in a meta-analys...

  7. Small renal size in newborns with spina bifida: possible causes.

    PubMed

    Montaldo, Paolo; Montaldo, Luisa; Iossa, Azzurra Concetta; Cennamo, Marina; Caredda, Elisabetta; Del Gado, Roberto

    2014-02-01

    Previous studies reported that children with neural tube defects, but without any history of intrinsic renal diseases, have small kidneys when compared with age-matched standard renal growth. The aim of this study was to investigate the possible causes of small renal size in children with spina bifida by comparing growth hormone deficiency, physical limitations and hyperhomocysteinemia. The sample included 187 newborns with spina bifida. Renal sizes in the patients were assessed by using maximum measurement of renal length and the measurements were compared by using the Sutherland nomogram. According to the results, the sample was divided into two groups: a group of 120 patients with small kidneys (under the third percentile) and a control group of 67 newborns with normal kidney size. Plasma total homocysteine was investigated in mothers and in their children. Serum insulin-like growth factor-1 (IGF-1) levels were measured. Serum IGF-1 levels were normal in both groups. Children and mothers with homocysteine levels >10 μmol/l were more than twice as likely to have small kidneys and to give birth to children with small kidneys, respectively, compared with newborns and mothers with homocysteine levels <10 μmol/l. An inverse correlation was also found between the homocysteine levels of mothers and kidney sizes of children (r = -0.6109, P ≤ 0.01). It is highly important for mothers with hyperhomocysteinemia to be educated about benefits of folate supplementation in order to reduce the risk of small renal size and lower renal function in children.

  8. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the drying step optimization, large-size spinel samples were obtained.

  9. Recommended protocols for sampling macrofungi

    Treesearch

    Gregory M. Mueller; John Paul Schmit; Sabine M. Hubndorf; Leif Ryvarden; Thomas E. O'Dell; D. Jean Lodge; Patrick R. Leacock; Milagro Mata; Loengrin Umania; Qiuxin (Florence) Wu; Daniel L. Czederpiltz

    2004-01-01

    This chapter discusses several issues regarding recommended protocols for sampling macrofungi: opportunistic sampling of macrofungi, sampling conspicuous macrofungi using fixed-size plots, sampling small Ascomycetes using microplots, and sampling a fixed number of downed logs.

  10. Polymorphism in magic-sized Au144(SR)60 clusters

    NASA Astrophysics Data System (ADS)

    Jensen, Kirsten M. Ø.; Juhas, Pavol; Tofanelli, Marcus A.; Heinecke, Christine L.; Vaughan, Gavin; Ackerson, Christopher J.; Billinge, Simon J. L.

    2016-06-01

    Ultra-small, magic-sized metal nanoclusters represent an important new class of materials with properties between molecules and particles. However, their small size challenges the conventional methods for structure characterization. Here we present the structure of ultra-stable Au144(SR)60 magic-sized nanoclusters obtained from atomic pair distribution function analysis of X-ray powder diffraction data. The study reveals structural polymorphism in these archetypal nanoclusters. In addition to confirming the theoretically predicted icosahedral-cored cluster, we also find samples with a truncated decahedral core structure, with some samples exhibiting a coexistence of both cluster structures. Although the clusters are monodisperse in size, structural diversity is apparent. The discovery of polymorphism may open up a new dimension in nanoscale engineering.

  11. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  12. A microfluidic platform for precision small-volume sample processing and its use to size separate biological particles with an acoustic microdevice [Precision size separation of biological particles in small-volume samples by an acoustic microfluidic system

    DOE PAGES

    Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...

    2015-11-23

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  13. Determining the sample size for co-dominant molecular marker-assisted linkage detection for a monogenic qualitative trait by controlling the type-I and type-II errors in a segregating F2 population.

    PubMed

    Hühn, M; Piepho, H P

    2003-03-01

    Tests for linkage are usually performed using the lod score method. A critical question in linkage analyses is the choice of sample size. The appropriate sample size depends on the desired type-I error and power of the test. This paper investigates the exact type-I error and power of the lod score method in a segregating F(2) population with co-dominant markers and a qualitative monogenic dominant-recessive trait. For illustration, a disease-resistance trait is considered, where the susceptible allele is recessive. A procedure is suggested for finding the appropriate sample size. It is shown that recessive plants have about twice the information content of dominant plants, so the former should be preferred for linkage detection. In some cases the exact alpha-values for a given nominal alpha may be rather small due to the discrete nature of the sampling distribution in small samples. We show that a gain in power is possible by using exact methods.
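
    The sample-size logic described above can be mimicked by simulation; the sketch below uses a chi-square test of marker-phenotype independence as a stand-in for the paper's exact lod-score calculations, and the recombination fraction and significance level are arbitrary illustrative choices.

      # Sketch: simulation-based power for detecting marker-trait linkage in an F2
      # population (chi-square test used here in place of exact lod-score methods).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)

      def simulate_f2(n, theta):
          # Parental-type gametes (A,B) and (a,b) each w.p. (1-theta)/2;
          # recombinant gametes (A,b) and (a,B) each w.p. theta/2.
          gametes = np.array([[1, 1], [0, 0], [1, 0], [0, 1]])
          p = [(1 - theta) / 2, (1 - theta) / 2, theta / 2, theta / 2]
          g1 = gametes[rng.choice(4, size=n, p=p)]
          g2 = gametes[rng.choice(4, size=n, p=p)]
          marker = g1[:, 0] + g2[:, 0]                       # co-dominant: 0, 1, 2
          pheno = (g1[:, 1] + g2[:, 1] > 0).astype(int)      # dominant trait allele
          return marker, pheno

      def power(n, theta, alpha=0.05, reps=2000):
          hits = 0
          for _ in range(reps):
              marker, pheno = simulate_f2(n, theta)
              table = np.array([[np.sum((marker == m) & (pheno == ph))
                                 for ph in (0, 1)] for m in (0, 1, 2)])
              table = table[table.sum(axis=1) > 0]           # drop empty marker rows
              if table.shape[0] < 2 or (table.sum(axis=0) == 0).any():
                  continue                                   # degenerate small-sample table
              if stats.chi2_contingency(table)[1] < alpha:
                  hits += 1
          return hits / reps

      for n in (20, 50, 100):
          print(f"n={n:3d}  power to detect linkage at theta=0.2: {power(n, 0.2):.2f}")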

  14. Investigating Test Equating Methods in Small Samples through Various Factors

    ERIC Educational Resources Information Center

    Asiret, Semih; Sünbül, Seçil Ömür

    2016-01-01

    In this study, the aim was to compare equating methods for the random groups design in small samples across factors such as sample size, difference in difficulty between forms, and guessing parameter. Moreover, which method gives better results under which conditions was also investigated. In this study, 5,000 dichotomous simulated data…

  15. Improving the analysis of composite endpoints in rare disease trials.

    PubMed

    McMenamin, Martina; Berglind, Anna; Wason, James M S

    2018-05-22

    Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibits similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.

  16. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    PubMed

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations in small samples. We used a pooled method in a nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining type I error probability for any conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating the one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
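
    A minimal sketch of a pooled-resampling bootstrap test for two unpaired means, as we read the general approach (details of the published method may differ): both groups are pooled to represent the null hypothesis, groups of the original sizes are resampled with replacement, and the observed t statistic is referred to the resulting null distribution.

      # Sketch of a pooled-resampling bootstrap t-test (illustrative reading of the
      # general approach, not the published method's exact algorithm).
      import numpy as np
      from scipy import stats

      def pooled_bootstrap_ttest(x, y, n_boot=10_000, seed=1):
          rng = np.random.default_rng(seed)
          t_obs, _ = stats.ttest_ind(x, y, equal_var=False)
          pooled = np.concatenate([x, y])          # pool both groups under H0
          count = 0
          for _ in range(n_boot):
              xb = rng.choice(pooled, size=len(x), replace=True)
              yb = rng.choice(pooled, size=len(y), replace=True)
              t_b, _ = stats.ttest_ind(xb, yb, equal_var=False)
              if abs(t_b) >= abs(t_obs):
                  count += 1
          return (count + 1) / (n_boot + 1)        # two-sided bootstrap p-value

      x = np.array([4.2, 5.1, 3.8, 6.0, 4.9])      # small-sample toy data
      y = np.array([6.3, 7.1, 5.9, 6.8, 7.4])
      print(pooled_bootstrap_ttest(x, y))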

  17. Reversing the Signaled Magnitude Effect in Delayed Matching to Sample: Delay-Specific Remembering?

    ERIC Educational Resources Information Center

    White, K. Geoffrey; Brown, Glenn S.

    2011-01-01

    Pigeons performed a delayed matching-to-sample task in which large or small reinforcers for correct remembering were signaled during the retention interval. Accuracy was low when small reinforcers were signaled, and high when large reinforcers were signaled (the signaled magnitude effect). When the reinforcer-size cue was switched from small to…

  18. TableSim--A program for analysis of small-sample categorical data.

    Treesearch

    David J. Rugg

    2003-01-01

    Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft command-line environments.

  19. Polymorphism in magic-sized Au144(SR)60 clusters

    DOE PAGES

    Jensen, Kirsten M. O.; Juhas, Pavol; Tofanelli, Marcus A.; ...

    2016-06-14

    Ultra-small, magic-sized metal nanoclusters represent an important new class of materials with properties between molecules and particles. However, their small size challenges the conventional methods for structure characterization. We present the structure of ultra-stable Au144(SR)60 magic-sized nanoclusters obtained from atomic pair distribution function analysis of X-ray powder diffraction data. Our study reveals structural polymorphism in these archetypal nanoclusters. In addition to confirming the theoretically predicted icosahedral-cored cluster, we also find samples with a truncated decahedral core structure, with some samples exhibiting a coexistence of both cluster structures. Although the clusters are monodisperse in size, structural diversity is apparent. Finally, the discovery of polymorphism may open up a new dimension in nanoscale engineering.

  20. Got power? A systematic review of sample size adequacy in health professions education research.

    PubMed

    Cook, David A; Hatala, Rose

    2015-03-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies, the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among the 297 studies comparing alternate simulation approaches, the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
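
    Readers who want to reproduce this kind of post hoc power audit can do so with statsmodels, as sketched below for a two-sample t-test at the review's median sample sizes; we assume the reported medians are total sample sizes split evenly between two groups.

      # Sketch: power of a two-group comparison at small sample sizes, for small,
      # medium, and large standardized mean differences (SMDs).
      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      for total_n in (25, 30):                  # median sizes reported in the review
          n_per_group = total_n // 2            # assumption: even split of the total
          for smd in (0.2, 0.5, 0.8):
              p = analysis.power(effect_size=smd, nobs1=n_per_group,
                                 alpha=0.05, ratio=1.0, alternative='two-sided')
              print(f"total n={total_n:2d}  SMD={smd}  power={p:.2f}")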

  1. Small-Sample DIF Estimation Using SIBTEST, Cochran's Z, and Log-Linear Smoothing

    ERIC Educational Resources Information Center

    Lei, Pui-Wa; Li, Hongli

    2013-01-01

    Minimum sample sizes of about 200 to 250 per group are often recommended for differential item functioning (DIF) analyses. However, there are times when sample sizes for one or both groups of interest are smaller than 200 due to practical constraints. This study attempts to examine the performance of Simultaneous Item Bias Test (SIBTEST),…

  2. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), which originated from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested by using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.

  3. Improvements to sample processing and measurement to enable more widespread environmental application of tritium.

    PubMed

    Moran, James; Alexander, Thomas; Aalseth, Craig; Back, Henning; Mace, Emily; Overman, Cory; Seifert, Allen; Freeburg, Wilcox

    2017-08-01

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment. Copyright © 2017. Published by Elsevier Ltd.

  4. Improvements to sample processing and measurement to enable more widespread environmental application of tritium

    DOE PAGES

    Moran, James; Alexander, Thomas; Aalseth, Craig; ...

    2017-01-26

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. Here, we present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We also identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. Furthermore, this enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment.

  5. Protection of obstetric dimensions in a small-bodied human sample.

    PubMed

    Kurki, Helen K

    2007-08-01

    In human females, the bony pelvis must find a balance between being small (narrow) for efficient bipedal locomotion, and being large to accommodate a relatively large newborn. It has been shown that within a given population, taller/larger-bodied women have larger pelvic canals. This study investigates whether in a population where small body size is the norm, pelvic geometry (size and shape), on average, shows accommodation to protect the obstetric canal. Osteometric data were collected from the pelves, femora, and clavicles (body size indicators) of adult skeletons representing a range of adult body size. Samples include Holocene Later Stone Age (LSA) foragers from southern Africa (n = 28 females, 31 males), Portuguese from the Coimbra-identified skeletal collection (CISC) (n = 40 females, 40 males) and European-Americans from the Hamann-Todd osteological collection (H-T) (n = 40 females, 40 males). Patterns of sexual dimorphism are similar in the samples. Univariate and multivariate analyses of raw and Mosimann shape-variables indicate that compared to the CISC and H-T females, the LSA females have relatively large midplane and outlet canal planes (particularly posterior and A-P lengths). The LSA males also follow this pattern, although with absolutely smaller pelves in multivariate space. The CISC females, who have equally small stature, but larger body mass, do not show the same type of pelvic canal size and shape accommodation. The results suggest that adaptive allometric modeling in at least some small-bodied populations protects the obstetric canal. These findings support the use of population-specific attributes in the clinical evaluation of obstetric risk. (c) 2007 Wiley-Liss, Inc.

  6. Sample size calculation for a proof of concept study.

    PubMed

    Yin, Yin

    2002-05-01

    Sample size calculation is vital for a confirmatory clinical trial since the regulatory agencies require the probability of making a Type I error to be small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC and the process of sample size calculation. The results will be presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for PoC, and the sample size used for PoC.
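
    One common reading of this Bayesian planning idea is an "assurance" calculation: average the probability of clearing the PoC goalpost over a prior on the true effect. The sketch below is such a Monte Carlo calculation under assumed prior, goalpost, and outcome-variance values; it is not the article's exact procedure.

      # Sketch: Monte Carlo "assurance" for a proof-of-concept study -- the
      # probability of meeting a success criterion, averaged over a prior on the
      # true effect. All numbers below are illustrative assumptions.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      prior_mean, prior_sd = 0.4, 0.3   # prior on the true standardized effect
      sigma = 1.0                        # known outcome SD (assumed)
      goalpost = 0.2                     # PoC succeeds if estimated effect > goalpost

      for n in (20, 40, 80):             # per-arm sample size
          deltas = rng.normal(prior_mean, prior_sd, size=100_000)
          se = sigma * np.sqrt(2.0 / n)  # SE of the difference in means
          # P(observed effect clears the goalpost | delta), averaged over the prior.
          assurance = stats.norm.sf(goalpost, loc=deltas, scale=se).mean()
          print(f"n per arm={n:3d}  assurance={assurance:.3f}")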

  7. Statistical issues in reporting quality data: small samples and casemix variation.

    PubMed

    Zaslavsky, A M

    2001-12-01

    To present two key statistical issues that arise in analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (interunit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
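
    The shrinkage idea mentioned above can be sketched in a few lines: each unit's observed mean is pulled toward the overall mean with a weight equal to its reliability, var_between / (var_between + var_within / n). The variance components, unit sizes, and grand mean below are assumptions for illustration.

      # Sketch: empirical-Bayes-style shrinkage of small-sample unit means toward
      # the overall mean, weighted by the usual reliability ratio (illustrative).
      import numpy as np

      rng = np.random.default_rng(3)
      var_between, var_within = 0.04, 1.0          # assumed variance components
      grand = 0.8                                  # overall mean, assumed known
      true_unit_means = rng.normal(grand, np.sqrt(var_between), size=8)
      sizes = np.array([5, 10, 20, 40, 80, 160, 320, 640])  # unequal unit sizes

      for mu, n in zip(true_unit_means, sizes):
          obs = rng.normal(mu, np.sqrt(var_within), size=n).mean()
          reliability = var_between / (var_between + var_within / n)
          shrunk = reliability * obs + (1 - reliability) * grand
          print(f"n={n:4d}  reliability={reliability:.2f}  "
                f"raw={obs:.3f}  shrunk={shrunk:.3f}")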

  8. Carbon monoxide emission from small galaxies

    NASA Technical Reports Server (NTRS)

    Thronson, Harley A., Jr.; Bally, John

    1987-01-01

    A search was conducted for J = 1 → 0 CO emission from 22 galaxies, detecting half, as part of a survey to study star formation in small to medium size galaxies. Although substantial variation was found in the star formation efficiencies of the sample galaxies, there is no apparent systematic trend with galaxy size.

  9. Compact ultrahigh vacuum sample environments for x-ray nanobeam diffraction and imaging.

    PubMed

    Evans, P G; Chahine, G; Grifone, R; Jacques, V L R; Spalenka, J W; Schülli, T U

    2013-11-01

    X-ray nanobeams present the opportunity to obtain structural insight in materials with small volumes or nanoscale heterogeneity. The effective spatial resolution of the information derived from nanobeam techniques depends on the stability and precision with which the relative position of the x-ray optics and sample can be controlled. Nanobeam techniques include diffraction, imaging, and coherent scattering, with applications throughout materials science and condensed matter physics. Sample positioning is a significant mechanical challenge for x-ray instrumentation providing vacuum or controlled gas environments at elevated temperatures. Such environments often have masses that are too large for nanopositioners capable of the required positional accuracy of the order of a small fraction of the x-ray spot size. Similarly, the need to place x-ray optics as close as 1 cm to the sample places a constraint on the overall size of the sample environment. We illustrate a solution to the mechanical challenge in which compact ion-pumped ultrahigh vacuum chambers with masses of 1-2 kg are integrated with nanopositioners. The overall size of the environment is sufficiently small to allow their use with zone-plate focusing optics. We describe the design of sample environments for elevated-temperature nanobeam diffraction experiments and demonstrate in situ diffraction, reflectivity, and scanning nanobeam imaging of the ripening of Au crystallites on Si substrates.

  10. Compact ultrahigh vacuum sample environments for x-ray nanobeam diffraction and imaging

    NASA Astrophysics Data System (ADS)

    Evans, P. G.; Chahine, G.; Grifone, R.; Jacques, V. L. R.; Spalenka, J. W.; Schülli, T. U.

    2013-11-01

    X-ray nanobeams present the opportunity to obtain structural insight in materials with small volumes or nanoscale heterogeneity. The effective spatial resolution of the information derived from nanobeam techniques depends on the stability and precision with which the relative position of the x-ray optics and sample can be controlled. Nanobeam techniques include diffraction, imaging, and coherent scattering, with applications throughout materials science and condensed matter physics. Sample positioning is a significant mechanical challenge for x-ray instrumentation providing vacuum or controlled gas environments at elevated temperatures. Such environments often have masses that are too large for nanopositioners capable of the required positional accuracy of the order of a small fraction of the x-ray spot size. Similarly, the need to place x-ray optics as close as 1 cm to the sample places a constraint on the overall size of the sample environment. We illustrate a solution to the mechanical challenge in which compact ion-pumped ultrahigh vacuum chambers with masses of 1-2 kg are integrated with nanopositioners. The overall size of the environment is sufficiently small to allow their use with zone-plate focusing optics. We describe the design of sample environments for elevated-temperature nanobeam diffraction experiments and demonstrate in situ diffraction, reflectivity, and scanning nanobeam imaging of the ripening of Au crystallites on Si substrates.

  11. Evaluation of residual uranium contamination in the dirt floor of an abandoned metal rolling mill.

    PubMed

    Glassford, Eric; Spitz, Henry; Lobaugh, Megan; Spitler, Grant; Succop, Paul; Rice, Carol

    2013-02-01

    A single, large, bulk sample of uranium-contaminated material from the dirt floor of an abandoned metal rolling mill was separated into different types and sizes of aliquots to simulate samples that would be collected during site remediation. The facility rolled approximately 11,000 tons of hot-forged ingots of uranium metal approximately 60 y ago, and it has not been used since that time. Thirty small mass (≈ 0.7 g) and 15 large mass (≈ 70 g) samples were prepared from the heterogeneously contaminated bulk material to determine how measurements of the uranium contamination vary with sample size. Aliquots of bulk material were also resuspended in an exposure chamber to produce six samples of respirable particles that were obtained using a cascade impactor. Samples of removable surface contamination were collected by wiping 100 cm² of the interior surfaces of the exposure chamber with 47-mm-diameter fiber filters. Uranium contamination in each of the samples was measured directly using high-resolution gamma ray spectrometry. As expected, results for isotopic uranium (i.e., 235U and 238U) measured with the large-mass and small-mass samples are significantly different (p < 0.001), and the coefficient of variation (COV) for the small-mass samples was greater than for the large-mass samples. The uranium isotopic concentrations measured in the air and on the wipe samples were not significantly different and were also not significantly different (p > 0.05) from results for the large- or small-mass samples. Large-mass samples are more reliable for characterizing heterogeneously distributed radiological contamination than small-mass samples since they exhibit the least variation compared to the mean. Thus, samples should be sufficiently large in mass to ensure that the results are truly representative of the heterogeneously distributed uranium contamination present at the facility. Monitoring exposure of workers and the public as a result of uranium contamination resuspended during site remediation should be evaluated using samples of sufficient size and type to accommodate the heterogeneous distribution of uranium in the bulk material.

  12. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method, and it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than abundance estimates derived from sample scenarios with many sample units of small area. It is important to consider accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, accuracy and precision of abundance estimates are often an afterthought addressed only during data analysis.

  13. Sample-size needs for forestry herbicide trials

    Treesearch

    S.M. Zedaker; T.G. Gregoire; James H. Miller

    1994-01-01

    Forest herbicide experiments are increasingly being designed to evaluate smaller treatment differences when comparing existing effective treatments, tank mix ratios, surfactants, and new low-rate products. The ability to detect small differences in efficacy is dependent upon the relationship among sample size, type I and II error probabilities, and the coefficients of...

  14. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    PubMed

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in their global properties. This apparent paradox is a consequence of the small numbers of simultaneously recorded neurons in experiment: when inferred via small sample sizes, many networks may be indistinguishable despite being globally distinct. We develop a connectivity measure that successfully classifies networks even when estimated locally with a few neurons at a time. We show that data from rat cortex is consistent with a network in which the likelihood of a connection between neurons depends on spatial distance and on nonspatial, asymmetric clustering. Copyright © 2017 the authors 0270-6474/17/378498-13$15.00/0.

  15. SIZE, STRUCTURE AND FUNCTIONALITY IN SHALLOW COVE COMMUNITIES IN RI

    EPA Science Inventory

    We are using an ecosystem approach to examine the ecological integrity and important habitats in small estuarine coves. We sampled the small undeveloped Coggeshall Cove during the summer of 1999. The cove was sampled at high tide at every 15 cm of substrate elevation along trans...

  16. Particle size of sediments collected from the bed of the Amazon River and its tributaries in May and June 1977

    USGS Publications Warehouse

    Nordin, Carl F.; Meade, R.H.; Curtis, W.F.; Bosio, N.J.; Delaney, B.M.

    1979-01-01

    One-hundred-eight samples of bed material were collected from the Amazon River and its major tributaries between Belem, Brazil , and Iquitos, Peru. Samples were taken with a standard BM-54 sampler or with pipe dredges from May 18 to June 5, 1977. Most of the samples have median diameters in the size range of fine to medium sand and contain small percentages of fine gravel. Complete size distributions are tabulated. (Woodard-USGS)

  17. Particle size of sediments collected from the bed of the Amazon River and its tributaries in June and July 1976

    USGS Publications Warehouse

    Nordin, Carl F.; Meade, R.H.; Mahoney, H.A.; Delany, B.M.

    1977-01-01

    Sixty-five samples of bed material were collected from the Amazon River and its major tributaries between Belem, Brazil, and Iquitos, Peru. Samples were taken with a standard BM-54 sampler, a pipe dredge, or a Helley-Smith bedload sampler. Most of the samples have median diameters in the size range of fine to medium sand and contain small percentages of fine gravel. Complete size distributions are tabulated.

  18. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies

    PubMed Central

    2014-01-01

    Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
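
    As a sketch of the general construction (the paper's exact modification is not reproduced here), the function below computes a Wald-type interval that treats the estimated AUC as a proportion, optionally with a continuity correction, using only the AUC estimate and the total sample size.

      # Sketch: Wald-type confidence interval treating the AUC as a proportion.
      # How n enters, and the form of the correction, are our assumptions.
      import math

      def wald_auc_ci(auc, n, alpha_z=1.959963984540054, continuity=False):
          se = math.sqrt(auc * (1 - auc) / n)
          cc = 1 / (2 * n) if continuity else 0.0   # continuity correction term
          lo = max(0.0, auc - alpha_z * se - cc)
          hi = min(1.0, auc + alpha_z * se + cc)
          return lo, hi

      print(wald_auc_ci(0.85, 40))                   # without continuity correction
      print(wald_auc_ci(0.85, 40, continuity=True))  # with, for small samples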

  19. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    PubMed

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.

  20. A Systematic Review of Surgical Randomized Controlled Trials: Part 2. Funding Source, Conflict of Interest, and Sample Size in Plastic Surgery.

    PubMed

    Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit

    2016-02-01

    The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43; minimum, 3; maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.

  1. Catch of channel catfish with tandem-set hoop nets and gill nets in lentic systems of Nebraska

    USGS Publications Warehouse

    Richters, Lindsey K.; Pope, Kevin L.

    2011-01-01

    Twenty-six Nebraska water bodies representing two ecosystem types (small standing waters and large standing waters) were surveyed during 2008 and 2009 with tandem-set hoop nets and experimental gill nets to determine if similar trends existed in catch rates and size structures of channel catfish Ictalurus punctatus captured with these gears. Gear efficiency was assessed as the number of sets (nets) that would be required to capture 100 channel catfish given observed catch per unit effort (CPUE). Efficiency of gill nets was not correlated with efficiency of hoop nets for capturing channel catfish. Small sample sizes prohibited estimation of proportional size distributions in most surveys; in the four surveys for which sample size was sufficient to quantify length-frequency distributions of captured channel catfish, distributions differed between gears. The CPUE of channel catfish did not differ between small and large water bodies for either gear. While catch rates of hoop nets were lower than rates recorded in previous studies, this gear was more efficient than gill nets at capturing channel catfish. However, comparisons of size structure between gears may be problematic.

  2. Bias in fallout data from nuclear surface shot SMALL BOY: an evaluation of sample perturbation by sieve sizing. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pascual, J.N.

    1967-06-26

    Evaluation of sample bias introduced by the mechanical sieving of Small Boy fallout samples for 10 minutes revealed the following: Up to 20% of the mass and 30% of the gamma-ray activity can be lost from the large-particle (greater than 1400 microns) fraction. The pan fraction (less than 44 microns) can gain in weight by as much as 79%, and in activity by as much as 44%. The gamma-ray spectra of the fractions were not noticeably altered by the process. Examination of unbiased pan fractions (before mechanical sieving) indicated bimodality of the mass-size distribution in a sample collected 9,200 feet from ground zero, but not in a sample collected at 13,300 feet.

  3. Damage Accumulation in Silica Glass Nanofibers.

    PubMed

    Bonfanti, Silvia; Ferrero, Ezequiel E; Sellerio, Alessandro L; Guerra, Roberto; Zapperi, Stefano

    2018-06-06

    The origin of the brittle-to-ductile transition, experimentally observed in amorphous silica nanofibers as the sample size is reduced, is still debated. Here we investigate the issue by extensive molecular dynamics simulations at low and room temperatures for a broad range of sample sizes, with open and periodic boundary conditions. Our results show that the enhanced ductility at small sample sizes is primarily due to diffuse damage accumulation, which for larger samples leads to brittle catastrophic failure. Surface effects such as boundary fluidization contribute to ductility at room temperature by promoting necking, but are not the main driver of the transition. Our results suggest that the experimentally observed size-induced ductility of silica nanofibers is a manifestation of finite-size criticality, as expected in general for quasi-brittle disordered networks.

  4. Guiding of Plasmons and Phonons in Complex Three Dimensional Structures

    DTIC Science & Technology

    2013-01-01

    typical sample. We employed X-ray diffraction (XRD) to measure the average grain size across the entire depth of the sample over spot sizes ... propagation distance L as the 1/e decay length of the field intensity along x ... as well as the network layout with subwavelength gap size and internode distance on the order of the effective wavelength, a small 2 x 2 resonant

  5. Exploratory Factor Analysis with Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  6. Application of SAXS and SANS in evaluation of porosity, pore size distribution and surface area of coal

    USGS Publications Warehouse

    Radlinski, A.P.; Mastalerz, Maria; Hinde, A.L.; Hainbuchner, M.; Rauch, H.; Baron, M.; Lin, J.S.; Fan, L.; Thiyagarajan, P.

    2004-01-01

    This paper discusses the applicability of small angle X-ray scattering (SAXS) and small angle neutron scattering (SANS) techniques for determining the porosity, pore size distribution and internal specific surface area in coals. The method is noninvasive, fast, inexpensive and does not require complex sample preparation. It uses coal grains of about 0.8 mm size mounted in standard pellets as used for petrographic studies. Assuming spherical pore geometry, the scattering data are converted into the pore size distribution in the size range 1 nm (10 Å) to 20 μm (200,000 Å) in diameter, accounting for both open and closed pores. FTIR as well as SAXS and SANS data for seven samples of oriented whole coals and corresponding pellets with vitrinite reflectance (Ro) values in the range 0.55% to 5.15% are presented and analyzed. Our results demonstrate that pellets adequately represent the average microstructure of coal samples. The scattering data have been used to calculate the maximum surface area available for methane adsorption. Total porosity as percentage of sample volume is calculated and compared with worldwide trends. By demonstrating the applicability of SAXS and SANS techniques to determine the porosity, pore size distribution and surface area in coals, we provide a new and efficient tool, which can be used for any type of coal sample, from a thin slice to a representative sample of a thick seam. © 2004 Elsevier B.V. All rights reserved.

  7. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small-sample performance estimation, such as a recently proposed procedure called Repeated Random Sampling (RSS), are also expected to result in heavily biased estimates, which in turn translate into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.
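
    The baseline procedure being critiqued, repeated random design/test splits within a fixed bag of samples, can be sketched with scikit-learn as below; the RIDT bias-modeling step itself is not reproduced here, and the dataset and classifier are arbitrary stand-ins.

      # Sketch: repeated random design/test splits to estimate the variability of
      # classifier error. With a small fixed bag of samples this variance estimate
      # is biased, which is precisely the problem the paper addresses.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import ShuffleSplit
      from sklearn.linear_model import LogisticRegression

      X, y = make_classification(n_samples=60, n_features=20, random_state=0)

      errors = []
      splitter = ShuffleSplit(n_splits=200, test_size=15, random_state=1)
      for train_idx, test_idx in splitter.split(X):
          clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
          errors.append(1.0 - clf.score(X[test_idx], y[test_idx]))

      print(f"mean error={np.mean(errors):.3f}  variance={np.var(errors):.4f}")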

  8. Vegetation patterns and abundances of amphibians and small mammals along small streams in a northwestern California watershed

    Treesearch

    Jeffrey R. Waters; Cynthia J. Zabel; Kevin S. McKelvey; Hartwell H. Welsh

    2001-01-01

    Our goal was to describe and evaluate patterns of association between stream size and abundances of amphibians and small mammals in a northwestern California watershed. We sampled populations at 42 stream sites and eight upland sites within a 100- watershed in 1995 and 1996. Stream reaches sampled ranged from poorly defined channels that rarely flowed to 10-m-wide...

  9. On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.

    PubMed

    Westgate, Philip M; Burchett, Woodrow W

    2017-03-15

    The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Heterogeneity in small aliquots of Apollo 15 olivine-normative basalt: Implications for breccia clast studies

    NASA Astrophysics Data System (ADS)

    Lindstrom, Marilyn M.; Shervais, John W.; Vetter, Scott K.

    1993-05-01

    Most of the recent advances in lunar petrology are the direct result of breccia pull-apart studies, which have identified a wide array of new highland and mare basalt rock types that occur only as clasts within the breccias. These rocks show that the lunar crust is far more complex than suspected previously, and that processes such as magma mixing and wall-rock assimilation were important in its petrogenesis. These studies are based on the implicit assumption that the breccia clasts, which range in size from a few mm to several cm across, are representative of the parent rock from which they were derived. In many cases, the aliquot allocated for analysis may be only a few grain diameters across. While this problem is most acute for coarse-grained highland rocks, it can also cause considerable uncertainty in the analysis of mare basalt clasts. Similar problems arise with small aliquots of individual hand samples. We report a study of sample heterogeneity in 9 samples of Apollo 15 olivine-normative basalt (ONB) that exhibit a range in average grain size from coarse to fine. Seven of these samples have not been analyzed previously, one has been analyzed by INAA only, and one has been analyzed by XRF+INAA. Our goal is to assess the effects of small aliquot size on the bulk chemistry of large mare basalt samples, and to extend this assessment to analyses of small breccia clasts.

11. Heterogeneity in small aliquots of Apollo 15 olivine-normative basalt: Implications for breccia clast studies

    NASA Technical Reports Server (NTRS)

    Lindstrom, Marilyn M.; Shervais, John W.; Vetter, Scott K.

    1993-01-01

Most of the recent advances in lunar petrology are the direct result of breccia pull-apart studies, which have identified a wide array of new highland and mare basalt rock types that occur only as clasts within the breccias. These rocks show that the lunar crust is far more complex than suspected previously, and that processes such as magma mixing and wall-rock assimilation were important in its petrogenesis. These studies are based on the implicit assumption that the breccia clasts, which range in size from a few mm to several cm across, are representative of the parent rock from which they were derived. In many cases, the aliquot allocated for analysis may be only a few grain diameters across. While this problem is most acute for coarse-grained highland rocks, it can also cause considerable uncertainty in the analysis of mare basalt clasts. Similar problems arise with small aliquots of individual hand samples. We report a study of sample heterogeneity in 9 samples of Apollo 15 olivine-normative basalt (ONB) which exhibit a range in average grain size from coarse to fine. Seven of these samples have not been analyzed previously, one has been analyzed by INAA only, and one has been analyzed by XRF+INAA. Our goal is to assess the effects of small aliquot size on the bulk chemistry of large mare basalt samples, and to extend this assessment to analyses of small breccia clasts.

  12. Small Sample Performance of Bias-corrected Sandwich Estimators for Cluster-Randomized Trials with Binary Outcomes

    PubMed Central

    Li, Peng; Redden, David T.

    2014-01-01

The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z-test should be avoided in the analyses of CRTs with few clusters even when bias-corrected sandwich estimators are used. With t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters one needs using the t test and KC correction for CRTs with binary outcomes. The power levels as predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended in CRTs with binary outcomes because of its fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738
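
    The cluster-count calculation the abstract alludes to can be approximated with the textbook design-effect formula for two-arm CRTs with binary outcomes. The sketch below uses a normal approximation rather than the paper's KC-corrected t-test formula, so the constants differ slightly; the function name and example values are illustrative.

    ```python
    import math
    from scipy.stats import norm

    def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.80):
        """Clusters needed per arm to detect p1 vs p2 with m subjects per
        cluster, inflating the variance by the design effect 1 + (m-1)*ICC.
        Standard z-approximation, not the paper's KC-corrected t-test formula."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        deff = 1 + (m - 1) * icc
        var = p1 * (1 - p1) + p2 * (1 - p2)
        k = (z_a + z_b) ** 2 * var * deff / (m * (p1 - p2) ** 2)
        return math.ceil(k)

    # e.g. detecting 0.30 vs 0.20 with 50 subjects per cluster and ICC = 0.02
    print(clusters_per_arm(0.30, 0.20, m=50, icc=0.02))  # -> 12 clusters per arm
    ```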

  13. Relationships fade with time: a meta-analysis of temporal trends in publication in ecology and evolution.

    PubMed Central

    Jennions, Michael D; Møller, Anders P

    2002-01-01

    Both significant positive and negative relationships between the magnitude of research findings (their 'effect size') and their year of publication have been reported in a few areas of biology. These trends have been attributed to Kuhnian paradigm shifts, scientific fads and bias in the choice of study systems. Here we test whether or not these isolated cases reflect a more general trend. We examined the relationship using effect sizes extracted from 44 peer-reviewed meta-analyses covering a wide range of topics in ecological and evolutionary biology. On average, there was a small but significant decline in effect size with year of publication. For the original empirical studies there was also a significant decrease in effect size as sample size increased. However, the effect of year of publication remained even after we controlled for sampling effort. Although these results have several possible explanations, it is suggested that a publication bias against non-significant or weaker findings offers the most parsimonious explanation. As in the medical sciences, non-significant results may take longer to publish and studies with both small sample sizes and non-significant results may be less likely to be published. PMID:11788035

  14. Small benthic size classes along the N.W. European Continental Margin: spatial and temporal variability in activity and biomass

    NASA Astrophysics Data System (ADS)

    Pfannkuche, O.; Soltwedel, T.

    1998-12-01

In the context of the European OMEX Programme this investigation focused on gradients in the biomass and activity of the small benthic size spectrum along a transect across the Goban Spur from the outer Celtic Sea into the Porcupine Abyssal Plain. The effects of food pulses (seasonal, episodic) on this part of the benthic size spectrum were investigated. Sediments sampled during eight expeditions at different seasons, covering a range from 200 m to 4800 m water depth, were assayed with biochemical bulk measurements: determinations of chloroplastic pigment equivalents (CPE), the sum of chlorophyll a and its breakdown products, provide information concerning the input of phytodetrital matter to the seafloor; phospholipids were analyzed to estimate the total biomass of small benthic organisms (including bacteria, fungi, flagellates, protozoa and small metazoan meiofauna). A new term, 'small size class biomass' (SSCB), is introduced for the biomass of the smallest size classes of sediment-inhabiting organisms; the reduction of fluorescein-di-acetate (FDA) was determined to evaluate the potential activity of ester-cleaving bacterial exoenzymes in the sediment samples. At all stations benthic biomass was predominantly composed of the small size spectrum (90% on the shelf; 97-98% in the bathyal and abyssal parts of the transect). Small size class biomass (integrated over a 10 cm sediment column) ranged from 8 g C m⁻² on the shelf to 2.1 g C m⁻² on the adjacent Porcupine Abyssal Plain, decreasing exponentially with increasing water depth. However, regressions of SSCB, macrofauna biomass, and metazoan meiofauna biomass against water depth exhibited a significantly flatter slope for the small size classes than for the larger organisms. CPE values indicated a pronounced seasonal cycle on the shelf and upper slope with twin peaks of phytodetrital deposition in mid spring and late summer. The deeper stations seem to receive a single annual flux maximum in late summer. SSCB and heterotrophic activity are significantly correlated with the amount of sediment-bound pigments. Seasonality in pigment concentrations is clearly followed by SSCB and activity. In contrast to macro- and megafauna, which integrate over longer periods (months/years), the small benthic size classes, namely bacteria and foraminifera, proved to be the components of the benthic communities that respond most readily to perturbations on short time scales (days/weeks). The small size classes therefore occupy a key role in early diagenetic processes.

  15. Effect on the grain size of single-mode microwave sintered NiCuZn ferrite and zinc titanate dielectric resonator ceramics.

    PubMed

    Sirugudu, Roopas Kiran; Vemuri, Rama Krishna Murthy; Venkatachalam, Subramanian; Gopalakrishnan, Anisha; Budaraju, Srinivasa Murty

    2011-01-01

Microwave sintering of materials depends significantly on dielectric, magnetic and conductive losses. Samples with high dielectric and magnetic loss, such as ferrites, can be sintered easily, but low-loss dielectric materials such as dielectric resonators (paraelectrics) have difficulty generating heat under microwave irradiation. Microwave sintering of materials of these two classes helps in understanding the variation in dielectric and magnetic characteristics with respect to the change in grain size. High-energy ball-milled Ni0.6Cu0.2Zn0.2Fe1.98O4-delta and ZnTiO3 are sintered by conventional and microwave methods and characterized for their respective dielectric and magnetic characteristics. The grain size variation with higher copper content is also observed with conventional and microwave sintering. The grain size in microwave-sintered Ni0.6Cu0.2Zn0.2Fe1.98O4-delta is found to be much smaller and more uniform in comparison with the conventionally sintered sample. However, the grain size of the microwave-sintered sample is almost equal to that of the conventionally sintered sample of Ni0.3Cu0.5Zn0.2Fe1.98O4-delta. In contrast to these high dielectric and magnetic loss ferrites, the paraelectric materials are observed to sinter in the presence of microwaves. Although the microwave-sintered zinc titanate sample showed finer and more uniform grains than the conventional samples, the dielectric characteristics of the microwave-sintered sample are found to be inferior to those of the conventional sample. The low dielectric constant is attributed to the low density. The smaller grain size is found to be responsible for the low quality factor, and the presence of a small percentage of TiO2 is observed to yield a temperature-stable resonant frequency.

  16. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was considerably faster for small to moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
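
    The multinomial-weighting idea is straightforward to reproduce. A minimal NumPy sketch follows; the paper's implementation is in R, so everything below is an illustrative reconstruction rather than the authors' code.

    ```python
    import numpy as np

    def bootstrap_corr_vectorized(x, y, n_boot=10_000, seed=0):
        """Vectorized non-parametric bootstrap of Pearson's r. Each replication
        reweights the observed pairs by multinomial counts, so all replications
        reduce to a few matrix products instead of explicit resampling."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        n = x.size
        rng = np.random.default_rng(seed)
        # n_boot x n matrix of multinomial counts; each row sums to n.
        w = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot)
        # Weighted first and second sample moments, all replications at once.
        sx, sy = w @ x / n, w @ y / n
        sxx, syy, sxy = w @ (x * x) / n, w @ (y * y) / n, w @ (x * y) / n
        return (sxy - sx * sy) / np.sqrt((sxx - sx**2) * (syy - sy**2))

    # Percentile confidence interval for the correlation of two small samples.
    rng = np.random.default_rng(1)
    x = rng.normal(size=20)
    y = 0.5 * x + rng.normal(size=20)
    print(np.percentile(bootstrap_corr_vectorized(x, y), [2.5, 97.5]))
    ```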

  17. Comparative study of soft thermal printing and lamination of dry thick photoresist films for the uniform fabrication of polymer MOEMS on small-sized samples

    NASA Astrophysics Data System (ADS)

    Abada, S.; Salvi, L.; Courson, R.; Daran, E.; Reig, B.; Doucet, J. B.; Camps, T.; Bardinal, V.

    2017-05-01

A method called ‘soft thermal printing’ (STP) was developed to ensure the optimal transfer of 50 µm-thick dry epoxy resist films (DF-1050) onto small-sized samples. The aim was the uniform fabrication of high-aspect-ratio polymer-based MOEMS (micro-optical-electrical-mechanical systems) on small and/or fragile samples, such as GaAs. The printing conditions were optimized, and the resulting thickness uniformity profiles were compared to those obtained via lamination and standard SU-8 spin-coating. Under the best conditions tested, STP and lamination produced similar results, with a maximum deviation from the central thickness of 3% across the sample surface, compared to greater than 40% for SU-8 spin-coating. Both methods were successfully applied to the collective fabrication of DF1050-based MOEMS designed for the dynamic focusing of VCSELs (vertical-cavity surface-emitting lasers). Similar, efficient electro-thermo-mechanical behaviour was obtained in both cases.

  18. Relationships between media use, body fatness and physical activity in children and youth: a meta-analysis.

    PubMed

    Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I

    2004-10-01

The objective was to review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity, by means of meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The outcome measure was the mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth, although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.
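
    For reference, the "sample-weighted" effect sizes reported above follow the usual weighting scheme; the standard formula is stated here for context, and the paper's "fully corrected" estimates may involve additional attenuation corrections:

    $$\bar{r} = \frac{\sum_{i=1}^{k} n_i r_i}{\sum_{i=1}^{k} n_i}$$

    where $n_i$ and $r_i$ are the sample size and observed correlation of study $i$; fully corrected estimates additionally adjust each $r_i$ for artifacts such as measurement unreliability before weighting.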

  19. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    PubMed

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results argue for the use of rANOVA with the Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated by means of large sample sizes. Accordingly, MLM-UN can be recommended for small sample sizes with about three measurement occasions, and for large sample sizes with up to about nine measurement occasions.
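
    For context, the Greenhouse-Geisser and Huynh-Feldt procedures referenced here rescale the ANOVA degrees of freedom by an estimated sphericity parameter. In its standard form (not specific to this study), the Greenhouse-Geisser estimate is

    $$\hat{\varepsilon} = \frac{\Bigl(\sum_{i=1}^{m-1} \lambda_i\Bigr)^{2}}{(m-1)\sum_{i=1}^{m-1} \lambda_i^{2}}, \qquad df_1 = \hat{\varepsilon}\,(m-1), \qquad df_2 = \hat{\varepsilon}\,(m-1)(n-1)$$

    where the $\lambda_i$ are the eigenvalues of the covariance matrix of the orthonormalized contrasts among the $m$ measurement occasions; $\hat{\varepsilon} = 1$ under perfect sphericity and $1/(m-1)$ at the maximal violation.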

  20. An analysis of Apollo lunar soil samples 12070,889, 12030,187, and 12070,891: Basaltic diversity at the Apollo 12 landing site and implications for classification of small-sized lunar samples

    NASA Astrophysics Data System (ADS)

    Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.

    2016-09-01

Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases, which highlights differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used on small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.
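
    The Mg# calculation mentioned above is easy to make concrete. The sketch below converts oxide wt% to the atomic ratio defined in the abstract and, as an assumed illustration, back-calculates an equilibrium melt Mg# from olivine using the canonical olivine-melt Fe-Mg exchange coefficient KD ≈ 0.30 (Roeder & Emslie); the paper's exact procedure may differ, and both function names are hypothetical.

    ```python
    def mg_number(mgo_wt, feo_wt):
        """Mg# = 100 * atomic Mg / (Mg + Fe) from oxide wt%
        (molar masses: MgO 40.304 g/mol, FeO 71.844 g/mol)."""
        mg = mgo_wt / 40.304
        fe = feo_wt / 71.844
        return 100.0 * mg / (mg + fe)

    def melt_mg_number_from_olivine(ol_mg_number, kd=0.30):
        """Equilibrium parent-melt Mg# from olivine Mg#, assuming
        KD = (Fe/Mg)_olivine / (Fe/Mg)_melt ~ 0.30 (illustrative only)."""
        fe_mg_ol = (100.0 - ol_mg_number) / ol_mg_number
        fe_mg_melt = fe_mg_ol / kd
        return 100.0 / (1.0 + fe_mg_melt)

    ol = mg_number(45.0, 14.0)   # roughly a Fo85 olivine composition
    print(round(ol, 1), round(melt_mg_number_from_olivine(ol), 1))
    ```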

  1. Invited Review Small is beautiful: The analysis of nanogram-sized astromaterials

    NASA Astrophysics Data System (ADS)

    Zolensky, M. E.; Pieters, C.; Clark, B.; Papike, J. J.

    2000-01-01

The capability of modern methods to characterize ultra-small samples is well established from analysis of interplanetary dust particles (IDPs), interstellar grains recovered from meteorites, and other materials requiring ultra-sensitive analytical capabilities. Powerful analytical techniques are available that require, under favorable circumstances, single particles of only a few nanograms for entire suites of fairly comprehensive characterizations. A returned sample of >1,000 particles with total mass of just one microgram permits comprehensive quantitative geochemical measurements that are impractical to carry out in situ by flight instruments. The main goal of this paper is to describe the state-of-the-art in microanalysis of astromaterials. Given that we can analyze fantastically small quantities of asteroids and comets, etc., we have to ask ourselves how representative are microscopic samples of bodies that measure a few to many km across? With the Galileo flybys of Gaspra and Ida, it is now recognized that even very small airless bodies have indeed developed a particulate regolith. Acquiring a sample of the bulk regolith, a simple sampling strategy, provides two critical pieces of information about the body. Regolith samples are excellent bulk samples since they normally contain all the key components of the local environment, albeit in particulate form. Furthermore, since this fine fraction dominates remote measurements, regolith samples also provide information about surface alteration processes and are a key link to remote sensing of other bodies. Studies indicate that a statistically significant number of nanogram-sized particles should be able to characterize the regolith of a primitive asteroid, although the presence of larger components within even primitive meteorites (e.g., Murchison), such as chondrules, CAIs, large crystal fragments, etc., points out the limitations of using data obtained from nanogram-sized samples to characterize entire primitive asteroids. However, most important asteroidal geological processes have left their mark on the matrix, since this is the finest-grained portion and therefore most sensitive to chemical and physical changes. Thus, the following information can be learned from this fine grain size fraction alone: (1) mineral paragenesis; (2) regolith processes; (3) bulk composition; (4) conditions of thermal and aqueous alteration (if any); (5) relationships to planets, comets, meteorites (via isotopic analyses, including oxygen); (6) abundance of water and hydrated material; (7) abundance of organics; (8) history of volatile mobility; (9) presence and origin of presolar and/or interstellar material. Most of this information can even be obtained from dust samples from bodies for which nanogram-sized samples are not truly representative. Future advances in sensitivity and accuracy of laboratory analytical techniques can be expected to enhance the science value of nano- to microgram-sized samples even further. This highlights a key advantage of sample returns: that the most advanced analysis techniques can always be applied in the laboratory, and that well-preserved samples are available for future investigations.

  2. Particle size analysis of sediments, soils and related particulate materials for forensic purposes using laser granulometry.

    PubMed

    Pye, Kenneth; Blott, Simon J

    2004-08-11

Particle size is a fundamental property of any sediment, soil or dust deposit which can provide important clues to nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with a high resolution. The Coulter™ LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04-2000 µm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.

  3. Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research

    ERIC Educational Resources Information Center

    Cook, David A.; Hatala, Rose

    2015-01-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…

  4. A Naturalistic Study of Driving Behavior in Older Adults and Preclinical Alzheimer Disease.

    PubMed

    Babulal, Ganesh M; Stout, Sarah H; Benzinger, Tammie L S; Ott, Brian R; Carr, David B; Webb, Mollie; Traub, Cindy M; Addison, Aaron; Morris, John C; Warren, David K; Roe, Catherine M

    2017-01-01

A clinical consequence of symptomatic Alzheimer's disease (AD) is impaired driving performance. However, decline in driving performance may begin in the preclinical stage of AD. We used a naturalistic driving methodology to examine differences in driving behavior over one year in a small sample of cognitively normal older adults with (n = 10) and without (n = 10) preclinical AD. As expected with a small sample size, there were no statistically significant differences between the two groups, but older adults with preclinical AD drove less often, were less likely to drive at night, and had fewer aggressive behaviors such as hard braking, speeding, and sudden acceleration. The sample size required to power a larger study to determine differences was calculated.

  5. Estimation of the bottleneck size in Florida panthers

    USGS Publications Warehouse

    Culver, M.; Hedrick, P.W.; Murphy, K.; O'Brien, S.; Hornocker, M.G.

    2008-01-01

We have estimated the extent of genetic variation in museum (1890s) and contemporary (1980s) samples of Florida panthers Puma concolor coryi for both nuclear loci and mtDNA. The microsatellite heterozygosity in the contemporary sample was only 0.325 times that in the museum samples, although our sample size and number of loci are limited. Support for this estimate is provided by a sample of 84 microsatellite loci in contemporary Florida panthers and Idaho pumas Puma concolor hippolestes, in which the contemporary Florida panther sample had only 0.442 times the heterozygosity of Idaho pumas. The estimated diversities in mtDNA in the museum and contemporary samples were 0.600 and 0.000, respectively. Using a population genetics approach, we have estimated that, to reduce either the microsatellite heterozygosity or the mtDNA diversity this much (in a period of c. 80 years during the 20th century when the numbers were thought to be low), a very small bottleneck size of c. 2 for several generations and a small effective population size in other generations are necessary. Using demographic data from Yellowstone pumas, we estimated the ratio of effective to census population size to be 0.315. Using this ratio, the census population size in the Florida panthers necessary to explain the loss of microsatellite variation was c. 41 for the non-bottleneck generations and 6.2 for the two bottleneck generations. These low bottleneck population sizes and the concomitant reduced effectiveness of selection are probably responsible for the high frequency of several detrimental traits in Florida panthers, namely undescended testicles and poor sperm quality. The recent intensive monitoring both before and after the introduction of Texas pumas in 1995 will make the recovery and genetic restoration of Florida panthers a classic study of an endangered species. Our estimates of the bottleneck size responsible for the loss of genetic variation in the Florida panther complete an unknown aspect of this account. © 2008 The Authors. Journal compilation © 2008 The Zoological Society of London.
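
    The drift calculation underlying this estimate can be written down compactly; the standard expression for expected heterozygosity loss (the general formula, not the authors' exact multi-generation model) is

    $$H_t = H_0\left(1 - \frac{1}{2N_e}\right)^{t}$$

    so an observed ratio $H_t/H_0 \approx 0.325$ over roughly $t$ generations strongly constrains the effective size $N_e$; with bottleneck and non-bottleneck generations of different sizes, the single factor is replaced by the product of per-generation factors $\prod_j \bigl(1 - 1/(2N_{e,j})\bigr)$.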

  6. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.

  7. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
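
    Both of these records build on the method-of-moments (Matheron) variogram estimator; for reference, its standard form (taken from the geostatistics literature, not quoted from the papers) is

    $$\hat{\gamma}(h) = \frac{1}{2\,|N(h)|}\sum_{(i,j)\in N(h)}\bigl(z(x_i) - z(x_j)\bigr)^{2}$$

    where $N(h)$ is the set of location pairs separated by (approximately) lag $h$. Because the squared differences give outliers a large influence, heavy-tailed throughfall data destabilize this estimator, which is what motivates the robust and residual-maximum-likelihood alternatives compared above.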

  8. A simple device to convert a small-animal PET scanner into a multi-sample tissue and injection syringe counter.

    PubMed

    Green, Michael V; Seidel, Jurgen; Choyke, Peter L; Jagoda, Elaine M

    2017-10-01

We describe a simple fixture that can be added to the imaging bed of a small-animal PET scanner that allows for automated counting of multiple organ or tissue samples from mouse-sized animals and counting of injection syringes prior to administration of the radiotracer. The combination of imaging and counting capabilities in the same machine offers advantages in certain experimental settings. A polyethylene block of plastic, sculpted to mate with the animal imaging bed of a small-animal PET scanner, is machined to receive twelve 5-ml containers, each capable of holding an entire organ from a mouse-sized animal. In addition, a triangular cross-section slot is machined down the centerline of the block to secure injection syringes from 1-ml to 3-ml in size. The sample holder is scanned in PET whole-body mode to image all samples or in one bed position to image a filled injection syringe. Total radioactivity in each sample or syringe is determined from the reconstructed images of these objects using volume re-projection of the coronal images and a single region-of-interest for each. We tested the accuracy of this method by comparing PET estimates of sample and syringe activity with well counter and dose calibrator estimates of these same activities. PET and well counting of the same samples gave near-identical results (in MBq: R² = 0.99, slope = 0.99, intercept = 0.00 MBq). PET syringe and dose calibrator measurements of syringe activity in MBq were also similar (R² = 0.99, slope = 0.99, intercept = -0.22 MBq). A small-animal PET scanner can be easily converted into a multi-sample and syringe counting device by the addition of a sample block constructed for that purpose. This capability, combined with live animal imaging, can improve efficiency and flexibility in certain experimental settings. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study.

    PubMed

    Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman

    2017-01-01

Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform-DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform-DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  10. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    PubMed Central

    Jafari, Peyman

    2017-01-01

Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform-DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform-DIF when the latent construct distribution is nonnormal and the focal group sample size is small. PMID:28713828
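
    In outline, a MIMIC model for uniform-DIF detection regresses both the latent trait and each item on the grouping covariate; a standard formulation (assumed here rather than taken from the paper) is

    $$\eta = \gamma g + \zeta, \qquad y_i^{*} = \lambda_i \eta + \beta_i g + \varepsilon_i$$

    where $g$ codes reference versus focal group membership. A nonzero direct effect $\beta_i$ on item $i$ signals uniform DIF, over and above any true group difference $\gamma$ on the latent trait $\eta$.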

  11. The kilometer-sized Main Belt asteroid population revealed by Spitzer

    NASA Astrophysics Data System (ADS)

    Ryan, E. L.; Mizuno, D. R.; Shenoy, S. S.; Woodward, C. E.; Carey, S. J.; Noriega-Crespo, A.; Kraemer, K. E.; Price, S. D.

    2015-06-01

Aims: Multi-epoch Spitzer Space Telescope 24 μm data from the MIPSGAL and Taurus Legacy surveys are utilized to detect asteroids based on their relative motion. Methods: Infrared detections are matched to known asteroids and average diameters and albedos are derived using the near Earth asteroid thermal model (NEATM) for 1865 asteroids ranging in size from 0.2 to 169 km. A small subsample of these objects was also detected by IRAS or MSX, and the single wavelength albedo and diameter fits derived from these data are within the uncertainties of the IRAS and/or MSX derived albedos and diameters and available occultation diameters, which demonstrates the robustness of our technique. Results: The mean geometric albedo of the small Main Belt asteroids in this sample is pV = 0.134 with a sample standard deviation of 0.106. The albedo distribution of this sample is far more diverse than the IRAS or MSX samples. The cumulative size-frequency distribution of asteroids in the Main Belt at small diameters is directly derived and a 3σ deviation from the fitted size-frequency distribution slope is found near 8 km. Completeness limits of the optical and infrared surveys are discussed. Tables 1-3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A42
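
    Radiometric fits of this kind tie diameter, geometric albedo, and absolute magnitude together through the standard relation (a general formula, not specific to this survey):

    $$D = \frac{1329\ \mathrm{km}}{\sqrt{p_V}}\,10^{-H/5}$$

    where $H$ is the absolute visual magnitude. The thermal model constrains $D$ from the 24 μm fluxes, and $p_V$ then follows from the cataloged $H$.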

  12. Effect of Small Numbers of Test Results on Accuracy of Hoek-Brown Strength Parameter Estimations: A Statistical Simulation Study

    NASA Astrophysics Data System (ADS)

    Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.

    2017-12-01

The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. Because particular combinations of H-B parameters can informally be considered to represent specific rock types, the minimum number of required samples depends on rock type and should correspond to some acceptable level of uncertainty in the estimates. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimates using small samples may be very low. We further discuss the impact of this on ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
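
    For reference, the intact-rock form of the Hoek-Brown criterion being fitted to the triaxial data sets is (standard form, with the constant $s = 1$ for intact rock):

    $$\sigma_1 = \sigma_3 + \sigma_c\left(m\,\frac{\sigma_3}{\sigma_c} + 1\right)^{1/2}$$

    where $\sigma_1$ and $\sigma_3$ are the major and minor principal stresses at failure. The estimation problem the paper examines is recovering $m$ and $\sigma_c$ from a small number of $(\sigma_3, \sigma_1)$ pairs, which is what makes the parameter uncertainty so sensitive to sample size.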

  13. On the role of dimensionality and sample size for unstructured and structured covariance matrix estimation

    NASA Technical Reports Server (NTRS)

    Morgera, S. D.; Cooper, D. B.

    1976-01-01

The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates, obtained by a recursive stochastic algorithm, of the inverse of the filter input data covariance matrix. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.

  14. Researchers’ Intuitions About Power in Psychological Research

    PubMed Central

    Bakker, Marjan; Hartgerink, Chris H. J.; Wicherts, Jelte M.; van der Maas, Han L. J.

    2016-01-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers’ experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. PMID:27354203

  15. Researchers' Intuitions About Power in Psychological Research.

    PubMed

    Bakker, Marjan; Hartgerink, Chris H J; Wicherts, Jelte M; van der Maas, Han L J

    2016-08-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers' experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. © The Author(s) 2016.
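
    The "sample size needed to obtain .80 power for detecting a small effect" that respondents underestimated is straightforward to compute; a minimal sketch with statsmodels' standard power machinery (not part of the survey's materials):

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Per-group n for a small effect (Cohen's d = 0.2), two-sided alpha = .05,
    # power = .80, in an independent-samples t-test.
    n = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.80)
    print(round(n))  # ~394 per group, i.e. roughly 788 subjects in total
    ```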

  16. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
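
    Wald's SPRT is simple to state in code. The sketch below is a generic Bernoulli version with the three-way outcome described in the abstract, not the TDT-specific statistic used in the paper; the hypotheses p0 and p1 and the error rates are illustrative parameters.

    ```python
    import math

    def sprt_bernoulli(data, p0, p1, alpha=0.05, beta=0.05):
        """Wald's sequential probability ratio test for H0: p = p0 vs
        H1: p = p1 on a stream of 0/1 observations. Returns a decision or
        'continue sampling' -- the third group the abstract describes."""
        upper = math.log((1 - beta) / alpha)   # crossing -> accept H1
        lower = math.log(beta / (1 - alpha))   # crossing -> accept H0
        llr = 0.0
        for x in data:
            llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr >= upper:
                return "accept H1"
            if llr <= lower:
                return "accept H0"
        return "continue sampling"

    # Ten observations are not yet decisive here, so sampling would continue.
    print(sprt_bernoulli([1, 1, 0, 1, 1, 1, 1, 0, 1, 1], p0=0.5, p1=0.8))
    ```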

  17. Synthesis And Characterization Of Reduced Size Ferrite Reinforced Polymer Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borah, Subasit; Bhattacharyya, Nidhi S.

    2008-04-24

Small-sized Co1-xNixFe2O4 ferrite particles are synthesized by a chemical route. The precursor materials are annealed at 400, 600 and 800 °C. The crystallographic structure and phases of the samples are characterized by X-ray diffraction (XRD). The annealed ferrite samples crystallized into the cubic spinel structure. Transmission electron microscopy (TEM) micrographs show that the average particle size of the samples is <20 nm. Particulate magneto-polymer composite materials are fabricated by reinforcing a low-density polyethylene (LDPE) matrix with the ferrite samples. The B-H loop study conducted at 10 kHz on the toroid-shaped composite samples shows a reduction in magnetic losses with decreasing size of the filler sample. Magnetic losses are detrimental for applications of ferrites at high powers. The reduction in magnetic loss shows a possible application of Co-Ni ferrites at high microwave power levels.

  18. Degradation resistance of 3Y-TZP ceramics sintered using spark plasma sintering

    NASA Astrophysics Data System (ADS)

    Chintapalli, R.; Marro, F. G.; Valle, J. A.; Yan, H.; Reece, M. J.; Anglada, M.

    2009-09-01

Commercially available tetragonal zirconia powder doped with 3 mol% of yttria has been sintered using spark plasma sintering (SPS) and has been investigated for its resistance to hydrothermal degradation. Samples were sintered at 1100, 1150, 1175 and 1600 °C at a constant pressure of 100 MPa with a soaking time of 5 minutes, and the grain sizes obtained were 65, 90, 120 and 800 nm, respectively. Samples sintered conventionally with a grain size of 300 nm were also compared with samples sintered using SPS. Finely polished samples were subjected to artificial degradation at 131 °C for 60 hours in water vapour in an autoclave under a pressure of 2 bar. The XRD studies show no phase transformation in samples with low density and small grain size (<200 nm), but significant phase transformation is seen in dense samples with larger grain size (>300 nm). Results are discussed in terms of present theories of hydrothermal degradation.

  19. Pituitary gland volumes in bipolar disorder.

    PubMed

    Clark, Ian A; Mackay, Clare E; Goodwin, Guy M

    2014-12-01

Bipolar disorder has been associated with increased hypothalamic-pituitary-adrenal axis function. The mechanism is not well understood, but there may be associated increases in pituitary gland volume (PGV) and these small increases may be functionally significant. However, research investigating PGV in bipolar disorder reports mixed results. The aim of the current study was twofold. First, to assess PGV in two novel samples of patients with bipolar disorder and matched healthy controls. Second, to perform a meta-analysis comparing PGV across a larger sample of patients and matched controls. Sample 1 consisted of 23 established patients and 32 matched controls. Sample 2 consisted of 39 medication-naïve patients and 42 matched controls. PGV was measured on structural MRI scans. Seven further studies were identified comparing PGV between patients and matched controls (total n: 244 patients, 308 controls). Both novel samples showed a small (approximately 20 mm³ or 4%), but non-significant, increase in PGV in patients. Combining the two novel samples showed a significant association of age and PGV. Meta-analysis showed a trend towards a larger pituitary gland in patients (effect size: .23, CI: -.14, .59). While results suggest a possible small difference in pituitary gland volume between patients and matched controls, larger mega-analyses with sample sizes greater even than those used in the current meta-analysis are still required. There is a small but potentially functionally significant increase in PGV in patients with bipolar disorder compared to controls. Results demonstrate the difficulty of finding potentially important but small effects in functional brain disorders. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Effect of Sampling Plans on the Risk of Escherichia coli O157 Illness.

    PubMed

    Kiermeier, Andreas; Sumner, John; Jenson, Ian

    2015-07-01

    Australia exports about 150,000 to 200,000 tons of manufacturing beef to the United States annually. Each lot is tested for Escherichia coli O157 using the N-60 sampling protocol, where 60 small pieces of surface meat from each lot of production are tested. A risk assessment of E. coli O157 illness from the consumption of hamburgers made from Australian manufacturing meat formed the basis to evaluate the effect of sample size and amount on the number of illnesses predicted. The sampling plans evaluated included no sampling (resulting in an estimated 55.2 illnesses per annum), the current N-60 plan (50.2 illnesses), N-90 (49.6 illnesses), N-120 (48.4 illnesses), and a more stringent N-60 sampling plan taking five 25-g samples from each of 12 cartons (47.4 illnesses per annum). While sampling may detect some highly contaminated lots, it does not guarantee that all such lots are removed from commerce. It is concluded that increasing the sample size or sample amount from the current N-60 plan would have a very small public health effect.
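
    Under the simplest possible model, where each of the N tested units is independently contaminated with probability p and the assay is perfectly sensitive, the detection probability of a sampling plan has a closed form; the paper's risk assessment models far more detail, so the sketch below is only an intuition aid.

    ```python
    def detection_prob(p, n=60):
        """P(at least one positive among n independently sampled units,
        each contaminated with probability p; perfect test sensitivity)."""
        return 1.0 - (1.0 - p) ** n

    for p in (0.001, 0.01, 0.05):
        print(f"p = {p:.3f}: detect with prob {detection_prob(p):.3f}")
    ```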

  1. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm²), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m²). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
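
    For randomly (Poisson) distributed items, the detection probability that these simulations estimate has a simple closed form (a standard result, given here as context rather than taken from the paper):

    $$P(\text{detect}) = 1 - e^{-\lambda a}$$

    where $\lambda$ is the item density (items/m²) and $a$ the core sampler area (m²). Clumped distributions fall below this curve, which is consistent with the lower detection probabilities the study reports for clumped items.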

  2. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali.

    PubMed

    Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco

    2012-10-12

Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local vaccination coverage (VC), using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), the standard errors of VC and intra-cluster correlation (ICC) estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.

  3. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with a 5% significance level ensured errors smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Scale Comparability between Nonaccommodated and Accommodated Forms of a Statewide High School Assessment: Assessment Using "l_z" Person-Fit

    ERIC Educational Resources Information Center

    Seo, Dong Gi; Hao, Shiqi

    2016-01-01

    Differential item and differential test functioning (DIF/DTF) analyses are routine procedures for detecting item- or test-level unfairness as an explanation for group performance differences. However, unequal sample sizes and small sample sizes affect the statistical power of the DIF/DTF detection procedures. Furthermore, DIF/DTF cannot be used for two test forms without…

  5. Critical size of crystalline ZrO2 nanoparticles synthesized in near- and supercritical water and supercritical isopropyl alcohol.

    PubMed

    Becker, Jacob; Hald, Peter; Bremholm, Martin; Pedersen, Jan S; Chevallier, Jacques; Iversen, Steen B; Iversen, Bo B

    2008-05-01

    Nanocrystalline ZrO2 samples with narrow size distributions and mean particle sizes below 10 nm have been synthesized in a continuous flow reactor in near- and supercritical water as well as in supercritical isopropyl alcohol, using a wide range of temperatures, pressures, concentrations and precursors. The samples were comprehensively characterized by powder X-ray diffraction (PXRD), transmission electron microscopy (TEM), and small-angle X-ray scattering (SAXS), and the influence of the synthesis parameters on the particle size, particle size distribution, shape, aggregation and crystallinity was studied. Depending on the choice of synthesis parameters, either monoclinic or tetragonal zirconia phases can be obtained. The results suggest a critical particle size of 5-6 nm for nanocrystalline monoclinic ZrO2 under the present conditions, which is smaller than estimates reported in the literature. Thus, very small monoclinic ZrO2 particles can be obtained using a continuous flow reactor. This is an important result with respect to improvement of the catalytic properties of nanocrystalline ZrO2.

  6. Optimal number of features as a function of sample size for various classification rules.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R

    2005-04-15

    Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study are considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website, which is meant to serve as a resource for those working with small-sample classification. For the companion website, please visit http://public.tgen.org/tamu/ofs/. Contact: e-dougherty@ee.tamu.edu.
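
    The peaking phenomenon described here can be demonstrated with a small simulation. The sketch below is a toy Gaussian model with independent features of shrinking usefulness (an assumption, not the paper's blocked-covariance design): it trains linear discriminant analysis on increasing feature counts at a fixed small sample size and estimates the error on a large test set.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)

    def mean_error_by_dimension(n_train, d_max=30, n_test=2000, n_rep=30):
        """Average LDA test error as a function of the number of features used."""
        errors = np.zeros(d_max)
        # Class-mean separation shrinks with feature index, so extra features
        # carry less and less information (illustrative assumption).
        mu = 0.6 / np.sqrt(np.arange(1, d_max + 1))
        for _ in range(n_rep):
            X0 = rng.normal(0.0, 1.0, (n_train + n_test, d_max))   # class 0
            X1 = rng.normal(mu, 1.0, (n_train + n_test, d_max))    # class 1
            for d in range(1, d_max + 1):
                Xtr = np.vstack([X0[:n_train, :d], X1[:n_train, :d]])
                ytr = np.r_[np.zeros(n_train), np.ones(n_train)]
                clf = LinearDiscriminantAnalysis().fit(Xtr, ytr)
                err = (np.mean(clf.predict(X0[n_train:, :d]) != 0) +
                       np.mean(clf.predict(X1[n_train:, :d]) != 1)) / 2.0
                errors[d - 1] += err / n_rep
        return errors

    errs = mean_error_by_dimension(n_train=15)   # 15 training samples per class
    print("estimated optimal number of features:", int(np.argmin(errs)) + 1)
    ```

    With this setup the error curve first falls and then rises again as dimensions are added, which is exactly the small-sample peaking behavior the study maps out across classifiers.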

  7. Comparative analyses of basal rate of metabolism in mammals: data selection does matter.

    PubMed

    Genoud, Michel; Isler, Karin; Martin, Robert D

    2018-02-01

    Basal rate of metabolism (BMR) is a physiological parameter that should be measured under strictly defined experimental conditions. In comparative analyses among mammals BMR is widely used as an index of the intensity of the metabolic machinery or as a proxy for energy expenditure. Many databases with BMR values for mammals are available, but the criteria used to select metabolic data as BMR estimates have often varied and the potential effect of this variability has rarely been questioned. We provide a new, expanded BMR database reflecting compliance with standard criteria (resting, postabsorptive state; thermal neutrality; adult, non-reproductive status for females) and examine potential effects of differential selectivity on the results of comparative analyses. The database includes 1739 different entries for 817 species of mammals, compiled from the original sources. It provides information permitting assessment of the validity of each estimate and presents the value closest to a proper BMR for each entry. Using different selection criteria, several alternative data sets were extracted and used in comparative analyses of (i) the scaling of BMR to body mass and (ii) the relationship between brain mass and BMR. It was expected that results would be especially dependent on selection criteria with small sample sizes and with relatively weak relationships. Phylogenetically informed regression (phylogenetic generalized least squares, PGLS) was applied to the alternative data sets for several different clades (Mammalia, Eutheria, Metatheria, or individual orders). For Mammalia, a 'subsampling procedure' was also applied, in which random subsamples of different sample sizes were taken from each original data set and successively analysed. In each case, two data sets with identical sample size and species, but comprising BMR data with different degrees of reliability, were compared. Selection criteria had minor effects on scaling equations computed for large clades (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. <100) results often appeared erratic. Subsampling revealed that sample size has a non-linear effect on the probability of a zero slope for a given relationship. Depending on the species included, results could differ dramatically, especially with small sample sizes. Overall, our findings indicate a need for due diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.
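
    The subsampling idea, random subsets of decreasing size drawn from a data set and re-analysed, is straightforward to emulate. Here is a minimal sketch with synthetic allometric data (the exponent 0.72 and the noise level are assumptions, not the paper's values), where ordinary least squares on log-log values stands in for the PGLS fits used in the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic allometry: log10(BMR) = a + b*log10(mass) + noise (b = 0.72 assumed).
    n_species = 800
    log_mass = rng.uniform(0.5, 6.0, n_species)      # log10 body mass
    log_bmr = -1.0 + 0.72 * log_mass + rng.normal(0, 0.25, n_species)

    def subsample_slopes(sample_size, n_rep=1000):
        """Distribution of fitted scaling exponents over random subsamples."""
        slopes = np.empty(n_rep)
        for i in range(n_rep):
            idx = rng.choice(n_species, size=sample_size, replace=False)
            slopes[i] = np.polyfit(log_mass[idx], log_bmr[idx], 1)[0]
        return slopes

    for n in (20, 50, 100, 400):
        s = subsample_slopes(n)
        print(f"n = {n:3d}: exponent = {s.mean():.3f} +/- {s.std(ddof=1):.3f}")
    ```

    The spread of the fitted exponent widens sharply below roughly n = 100, mirroring the erratic small-sample behavior the authors report.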

  8. Some physical properties of Apollo 12 lunar samples

    NASA Technical Reports Server (NTRS)

    Gold, T.; Oleary, B. T.; Campbell, M.

    1971-01-01

    The size distribution of the lunar fines is measured, and small but significant differences are found between the Apollo 11 and 12 samples as well as among the Apollo 12 core samples. The observed differences in grain size distribution in the core samples are related to surface transportation processes, and the importance of a sedimentation process versus meteoritic impact gardening of the mare grounds is discussed. The optical and radio-frequency electrical properties are measured and are also found to differ only slightly from the Apollo 11 results.

  9. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    PubMed

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years, size-dependent at early life stages, and size-independent at later life stages for most UK hardwood species. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size, and including explanatory covariates.

  10. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
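
    For readers unfamiliar with the estimators compared here, a compact sketch on a synthetic clustered population (counts and unit areas invented) contrasts the simple expansion estimator with the ratio estimator that uses sampling-unit area as the auxiliary variable. Because the synthetic counts are unrelated to area, the ratio estimator gains little precision, echoing the paper's observation for clumped pronghorn:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic population: 200 sampling units with known areas and clumped counts.
    N = 200
    area = rng.uniform(5, 15, N)                  # auxiliary variable
    counts = rng.negative_binomial(1, 0.05, N)    # highly skewed, clumped counts
    true_total = counts.sum()

    def one_survey(n=32):
        """Simple random sample without replacement; two estimators of the total."""
        idx = rng.choice(N, size=n, replace=False)
        y, x = counts[idx], area[idx]
        simple = N * y.mean()                     # expansion estimator
        ratio = area.sum() * y.sum() / x.sum()    # ratio estimator
        return simple, ratio

    sims = np.array([one_survey() for _ in range(5000)])
    for name, est in zip(("simple", "ratio"), sims.T):
        bias = est.mean() / true_total - 1
        cv = est.std(ddof=1) / est.mean() * 100
        print(f"{name:6s}: relative bias = {bias:+.3f}, CV = {cv:.1f}%")
    ```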

  11. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    PubMed

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performed poorly in most conditions except under the normal distribution. The goodness-of-fit χ2 test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  12. Using small area estimation and Lidar-derived variables for multivariate prediction of forest attributes

    Treesearch

    F. Mauro; Vicente Monleon; H. Temesgen

    2015-01-01

    Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...

  13. Planned Missing Data Designs with Small Sample Sizes: How Small Is Too Small?

    ERIC Educational Resources Information Center

    Jia, Fan; Moore, E. Whitney G.; Kinai, Richard; Crowe, Kelly S.; Schoemann, Alexander M.; Little, Todd D.

    2014-01-01

    Utilizing planned missing data (PMD) designs (e.g., 3-form surveys) enables researchers to ask participants fewer questions during the data collection process. An important question, however, is just how few participants are needed to effectively employ planned missing data designs in research studies. This article explores this question by using…

  14. An empirical analysis of the quantitative effect of data when fitting quadratic and cubic polynomials

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
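
    The Monte Carlo procedure described here can be sketched compactly: simulate noisy observations of a transcendental model function, fit a quadratic, and watch the mean-squared error of the fit stabilize as the sample grows. All specifics below (the sine model, noise level, and grid of sample sizes) are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def mse_of_fit(n, n_rep=2000, noise=0.1, degree=2):
        """Mean-squared error of a polynomial fit to contaminated samples of sin(x)."""
        x_eval = np.linspace(0.0, 1.5, 200)
        truth = np.sin(x_eval)
        total = 0.0
        for _ in range(n_rep):
            x = rng.uniform(0.0, 1.5, n)
            y = np.sin(x) + rng.normal(0.0, noise, n)   # contaminated data
            coeffs = np.polyfit(x, y, degree)           # quadratic approximation
            total += np.mean((np.polyval(coeffs, x_eval) - truth) ** 2)
        return total / n_rep

    for n in (5, 10, 20, 50, 200):
        print(f"n = {n:3d}: MSE = {mse_of_fit(n):.2e}")
    ```

    The error drops steeply at small n and then flattens, the same qualitative trend the abstract reports.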

  15. Simulation of Particle Size Effect on Dynamic Properties and Fracture of PTFE-W-Al Composites

    NASA Astrophysics Data System (ADS)

    Herbold, Eric; Cai, Jing; Benson, David; Nesterenko, Vitali

    2007-06-01

    Recent investigations of the dynamic compressive strength of cold isostatically pressed (CIP) composites of polytetrafluoroethylene (PTFE), tungsten, and aluminum powders show significant differences depending on the size of the metallic particles. PTFE and aluminum mixtures are known to be energetic under dynamic and thermal loading. The addition of tungsten increases the density and overall strength of the sample. Multi-material Eulerian and arbitrary Lagrangian-Eulerian methods were used for the investigation because of the complexity of the microstructure, the relatively large deformations, and the ability to handle the formation of free surfaces in a natural manner. The calculations indicate that the observed dependence of sample strength on particle size is due to the formation of force chains under dynamic loading in samples with small particle sizes, even at higher porosity, in comparison with samples with larger grain sizes and higher density.

  16. Ontogenetic prey size selection in snakes: predator size and functional limitations to handling minimum prey sizes.

    PubMed

    Hampton, Paul M

    2018-02-01

    As body size increases, some predators eliminate small prey from their diet, exhibiting an ontogenetic shift toward larger prey. In contrast, some predators show a telescoping pattern of prey size, in which both large and small prey are consumed with increasing predator size. To explore a functional explanation for the two feeding patterns, I examined feeding effort, as both handling time and the number of upper jaw movements, during ingestion of fish of consistent size. I used a range of body sizes from two snake species that exhibit ontogenetic shifts in prey size (Nerodia fasciata and N. rhombifer) and a species that exhibits telescoping prey size with increased body size (Thamnophis proximus). For the two Nerodia species, individuals with small or large heads exhibited greater feeding difficulty compared to snakes of intermediate size. However, for T. proximus measures of feeding effort were negatively correlated with head length and snout-vent length (SVL). These data indicate that ontogenetic shifters of prey size develop trophic morphology large enough that feeding effort increases for disproportionately small prey. I also compared changes in body size between the two diet strategies for actively foraging snake species, using data gleaned from the literature, to determine whether increased change in body size, and thereby feeding morphology, is observable in snakes regardless of prey type or foraging habitat. Of the 30 species sampled from the literature, snakes that exhibit ontogenetic shifts in prey size have a greater magnitude of change in SVL than species that have telescoping prey size patterns. Based upon the results of the two data sets above, I conclude that ontogenetic shifts away from small prey occur in snakes due, in part, to growth of body size and feeding structures beyond what is efficient for handling small prey. Copyright © 2017. Published by Elsevier GmbH.

  17. Inertial impaction air sampling device

    DOEpatents

    Dewhurst, Katharine H.

    1990-01-01

    An inertial impactor to be used in an air sampling device for collection of respirable size particles in ambient air which may include a graphite furnace as the impaction substrate in a small-size, portable, direct analysis structure that gives immediate results and is totally self-contained allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by means of the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry.

  18. Inertial impaction air sampling device

    DOEpatents

    Dewhurst, K.H.

    1987-12-10

    An inertial impactor to be used in an air sampling device for collection of respirable size particles in ambient air which may include a graphite furnace as the impaction substrate in a small-size, portable, direct analysis structure that gives immediate results and is totally self-contained allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by means of the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry. 3 figs.

  19. Measurements of cloud condensation nuclei activity and droplet activation kinetics of wet processed regional dust samples and minerals

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Sokolik, I. N.; Nenes, A.

    2011-04-01

    This study reports laboratory measurements of particle size distributions, cloud condensation nuclei (CCN) activity, and droplet activation kinetics of wet-generated aerosols from clays, calcite, quartz, and desert soil samples from Northern Africa, East Asia/China, and North America. The dependence of critical supersaturation, sc, on particle dry diameter, Ddry, is used to characterize particle-water interactions and assess the ability of Frenkel-Halsey-Hill adsorption activation theory (FHH-AT) and Köhler theory (KT) to describe the CCN activity of the considered samples. Regional dust samples produce unimodal size distributions with particle sizes as small as 40 nm, CCN activation consistent with KT, and hygroscopicity similar to inorganic salts. Clays and minerals produce a bimodal size distribution; the CCN activity of the smaller mode is consistent with KT, while the larger mode is less hydrophilic, follows activation by FHH-AT, and displays almost identical CCN activity to dry-generated dust. Ion chromatography (IC) analysis performed on regional dust samples indicates a soluble fraction that cannot explain the CCN activity of dry- or wet-generated dust. A mass balance and hygroscopicity closure suggest that the small amount of ions (of low-solubility compounds like calcite) present in the dry dust dissolves in the aqueous suspension during the wet generation process and gives rise to the observed small hygroscopic mode. Overall, these results identify an artifact that may call into question the atmospheric relevance of dust CCN activity studies using the wet generation method. Based on a threshold droplet growth analysis, wet-generated mineral aerosols display activation kinetics similar to the ammonium sulfate calibration aerosol. Finally, a unified CCN activity framework that accounts for concurrent effects of solute and adsorption is developed to describe the CCN activity of aged or hygroscopic dusts.
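
    The sc(Ddry) relationship that anchors this analysis has a compact closed form in κ-Köhler theory, one common parameterization of the hygroscopic (KT) branch discussed here. A sketch under stated assumptions (the κ values are illustrative, and the FHH adsorption branch is not included):

    ```python
    import numpy as np

    def critical_supersaturation(d_dry, kappa, T=298.15):
        """kappa-Kohler critical supersaturation (%) for a dry diameter in meters."""
        sigma_w = 0.072   # surface tension of water, N/m
        M_w = 0.018       # molar mass of water, kg/mol
        rho_w = 1000.0    # density of water, kg/m^3
        R = 8.314         # gas constant, J/(mol K)
        A = 4.0 * sigma_w * M_w / (R * T * rho_w)   # Kelvin term, m
        return np.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry**3)) * 100.0

    for d_nm in (40, 100, 200):
        sc = critical_supersaturation(d_nm * 1e-9, kappa=0.6)  # salt-like kappa
        print(f"Ddry = {d_nm} nm: sc = {sc:.2f}%")
    ```

    The steep Ddry^(-3/2) dependence is why the 40 nm particles reported above are informative: small, salt-like particles activate at supersaturations well within the measurable range.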

  20. On the influence of crystal size and wavelength on native SAD phasing.

    PubMed

    Liebschner, Dorothee; Yamada, Yusuke; Matsugaki, Naohiro; Senda, Miki; Senda, Toshiya

    2016-06-01

    Native SAD is an emerging phasing technique that uses the anomalous signal of native heavy atoms to obtain crystallographic phases. The method does not require specific sample preparation to add anomalous scatterers, as the light atoms contained in the native sample are used as marker atoms. The most abundant anomalous scatterer used for native SAD, which is present in almost all proteins, is sulfur. However, the absorption edge of sulfur is at low energy (2.472 keV = 5.016 Å), which makes it challenging to carry out native SAD phasing experiments as most synchrotron beamlines are optimized for shorter wavelength ranges where the anomalous signal of sulfur is weak; for longer wavelengths, which produce larger anomalous differences, the absorption of X-rays by the sample, solvent, loop and surrounding medium (e.g. air) increases tremendously. Therefore, a compromise has to be found between measuring strong anomalous signal and minimizing absorption. It was thus hypothesized that shorter wavelengths should be used for large crystals and longer wavelengths for small crystals, but no thorough experimental analyses have been reported to date. To study the influence of crystal size and wavelength, native SAD experiments were carried out at different wavelengths (1.9 and 2.7 Å with a helium cone; 3.0 and 3.3 Å with a helium chamber) using lysozyme and ferredoxin reductase crystals of various sizes. For the tested crystals, the results suggest that larger sample sizes do not have a detrimental effect on native SAD data and that long wavelengths give a clear advantage with small samples compared with short wavelengths. The resolution dependency of substructure determination was analyzed and showed that high-symmetry crystals with small unit cells require higher resolution for the successful placement of heavy atoms.

  1. Magnetic properties of Apollo 14 breccias and their correlation with metamorphism.

    NASA Technical Reports Server (NTRS)

    Gose, W. A.; Pearce, G. W.; Strangway, D. W.; Larson, E. E.

    1972-01-01

    The magnetic properties of Apollo 14 breccias can be explained in terms of the grain size distribution of the interstitial iron which is directly related to the metamorphic grade of the sample. In samples 14049 and 14313 iron grains less than 500 Å in diameter are dominant as evidenced by a Richter-type magnetic aftereffect and hysteresis measurements. Both samples are of lowest metamorphic grade. The medium metamorphic-grade sample 14321 and the high-grade sample 14312 both show a logarithmic time-dependence of the magnetization indicative of a wide range of relaxation times and thus grain sizes, but sample 14321 contains a stable remanent magnetization whereas sample 14312 does not. This suggests that small multidomain particles (less than 1 micron) are most abundant in sample 14321 while sample 14312 is magnetically controlled by grains greater than 1 micron. The higher the metamorphic grade, the larger the grain size of the iron controlling the magnetic properties.

  2. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
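
    The recipe in this abstract translates directly into a short calculation. A sketch for the logistic case, with illustrative inputs (the two-proportion power formula is the usual normal approximation, not code from the paper):

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    def logistic_power(n, beta, sd_x, p_overall, alpha=0.05):
        """Approximate power of the slope test in logistic regression via the
        equivalent two-sample problem: two equal groups whose log-odds differ
        by beta * 2 * sd_x, with the overall event probability held fixed."""
        delta = beta * 2.0 * sd_x
        expit = lambda t: 1.0 / (1.0 + np.exp(-t))
        # Baseline log-odds eta such that the mean of the two group
        # probabilities equals the overall response probability.
        eta = brentq(lambda e: 0.5 * (expit(e) + expit(e + delta)) - p_overall,
                     -20.0, 20.0)
        p1, p2 = expit(eta), expit(eta + delta)
        m = n / 2.0                           # size of each equivalent group
        se = np.sqrt(p1 * (1 - p1) / m + p2 * (1 - p2) / m)
        z = abs(p2 - p1) / se
        return norm.cdf(z - norm.ppf(1 - alpha / 2))

    # Example: slope 0.35 per unit of a covariate with SD 1.2, 30% overall response.
    for n in (50, 100, 200, 400):
        print(f"n = {n}: power ~ {logistic_power(n, 0.35, 1.2, 0.30):.2f}")
    ```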

  3. Quantal Response: Estimation and Inference

    DTIC Science & Technology

    2014-09-01

    considered. The CI-based test is just another way of looking at the Wald test. A small-sample simulation illustrates aberrant behavior of the Wald/CI... asymptotic power computation (Eq. 36) exhibits this behavior, but not to such an extent as the simulated small-sample power. Sample size is n = 11 and... as |m1 − m0| increases, but the power of the Wald test actually decreases for large |m1 − m0| and eventually π → α. This type of behavior was reported as

  4. Directions for new developments on statistical design and analysis of small population group trials.

    PubMed

    Hilgers, Ralf-Dieter; Roes, Kit; Stallard, Nigel

    2016-06-14

    Most statistical design and analysis methods for clinical trials have been developed and evaluated in settings where at least several hundred patients could be recruited. These methods may not be suitable to evaluate therapies if the sample size is unavoidably small, a setting usually termed small populations. The specific sample size cut-off, where the standard methods fail, needs to be investigated. In this paper, the authors present their view on new developments for design and analysis of clinical trials in small population groups, where conventional statistical methods may be inappropriate, e.g., because of lack of power or poor adherence to asymptotic approximations due to sample size restrictions. Following the EMA/CHMP guideline on clinical trials in small populations, we consider directions for new developments in the area of statistical methodology for design and analysis of small population clinical trials. We relate the findings to the research activities of three projects, Asterix, IDeAl, and InSPiRe, which have received funding since 2013 within the FP7-HEALTH-2013-INNOVATION-1 framework of the EU. As not all aspects of the wide research area of small population clinical trials can be addressed, we focus on areas where we feel advances are needed and feasible. The general framework of the EMA/CHMP guideline on small population clinical trials stimulates a number of research areas. These serve as the basis for the three projects, Asterix, IDeAl, and InSPiRe, which use various approaches to develop new statistical methodology for design and analysis of small population clinical trials. Small population clinical trials refer to trials with a limited number of patients. Small populations may result from rare diseases or specific subtypes of more common diseases. New statistical methodology needs to be tailored to these specific situations. The main results from the three projects will constitute a useful toolbox for improved design and analysis of small population clinical trials. They address various challenges presented by the EMA/CHMP guideline as well as recent discussions about extrapolation. There is a need for involvement of the patients' perspective in the planning and conduct of small population clinical trials for a successful therapy evaluation.

  5. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often relies on faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art systems.
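
    The core idea, synthesizing extra training samples from an assumed generic distribution and ensembling diverse base classifiers, can be caricatured in a few lines. In this sketch, Gaussian perturbation stands in for the paper's generic-learning model and plain bagging replaces the knapsack-based classifier selection; all parameters are illustrative:

    ```python
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.ensemble import BaggingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)

    # Small-sample regime: few training images, high-dimensional pixel features.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=40, stratify=y, random_state=0)

    # "Generic learning" stand-in: augment each training image with noisy
    # copies drawn from a Gaussian around it (variance is an assumption).
    n_copies = 10
    X_aug = np.vstack([X_train + rng.normal(0, 2.0, X_train.shape)
                       for _ in range(n_copies)] + [X_train])
    y_aug = np.tile(y_train, n_copies + 1)

    # Ensemble of base classifiers trained on random subsets of augmented data.
    ens = BaggingClassifier(LogisticRegression(max_iter=1000),
                            n_estimators=25, max_samples=0.5, random_state=0)
    ens.fit(X_aug, y_aug)
    print(f"test accuracy with {len(X_train)} real samples: "
          f"{ens.score(X_test, y_test):.3f}")
    ```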

  6. LEVEL AND EXTENT OF MERCURY CONTAMINATION IN OREGON, USA, LOTIC FISH

    EPA Science Inventory

    Because of growing concern with widespread mercury contamination of fish tissue, we sampled 154 streams and rivers throughout Oregon using a probability design. To maximize the sample size we took samples of small and large fish, where possible, from wadeable streams and boatable...

  7. Microgravity Testing of a Surface Sampling System for Sample Return from Small Solar System Bodies

    NASA Technical Reports Server (NTRS)

    Franzen, M. A.; Preble, J.; Schoenoff, M.; Halona, K.; Long, T. E.; Park, T.; Sears, D. W. G.

    2004-01-01

    The return of samples from solar system bodies is becoming an essential element of solar system exploration. The recent National Research Council Solar System Exploration Decadal Survey identified six sample return missions as high-priority missions: South Pole-Aitken Basin Sample Return, Comet Surface Sample Return, Comet Surface Sample Return with samples from selected surface sites, Asteroid Lander/Rover/Sample Return, Comet Nucleus Sample Return with cold samples from depth, and Mars Sample Return [1], and the NASA Roadmap also includes sample return missions [2]. Sample collection methods that have been flown on robotic spacecraft to date return subgram quantities, but many scientific issues (like bulk composition, particle size distributions, petrology, chronology) require tens to hundreds of grams of sample. Many complex sample collection devices have been proposed; however, small robotic missions require simplicity. We present here the results of experiments done with a simple but innovative collection system for sample return from small solar system bodies.

  8. Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses

    DOE PAGES

    Daurer, Benedikt J.; Okamoto, Kenta; Bielecki, Johan; ...

    2017-04-07

    This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ~40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ~35 to ~300 nm in diameter). This is likely to be owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles is critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10^12 photons per µm² per pulse. The full-width of the focus at half-maximum was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. Finally, the results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers.

  9. Atomistic origin of size effects in fatigue behavior of metallic glasses

    NASA Astrophysics Data System (ADS)

    Sha, Zhendong; Wong, Wei Hin; Pei, Qingxiang; Branicio, Paulo Sergio; Liu, Zishun; Wang, Tiejun; Guo, Tianfu; Gao, Huajian

    2017-07-01

    While many experiments and simulations on metallic glasses (MGs) have focused on their tensile ductility under monotonic loading, the fatigue mechanisms of MGs under cyclic loading still remain largely elusive. Here we perform molecular dynamics (MD) and finite element simulations of tension-compression fatigue tests in MGs to elucidate their fatigue mechanisms with focus on the sample size effect. Shear band (SB) thickening is found to be the inherent fatigue mechanism for nanoscale MGs. The difference in fatigue mechanisms between macroscopic and nanoscale MGs originates from whether the SB forms partially or fully through the cross-section of the specimen. Furthermore, a qualitative investigation of the sample size effect suggests that small sample size increases the fatigue life while large sample size promotes cyclic softening and necking. Our observations on the size-dependent fatigue behavior can be rationalized by the Gurson model and the concept of surface tension of the nanovoids. The present study sheds light on the fatigue mechanisms of MGs and can be useful in interpreting previous experimental results.

  10. A Meta-Analysis on Antecedents and Outcomes of Detachment from Work.

    PubMed

    Wendsche, Johannes; Lohmann-Haislah, Andrea

    2016-01-01

    Detachment from work has been proposed as an important non-work experience helping employees to recover from work demands. This meta-analysis (86 publications, k = 91 independent study samples, N = 38,124 employees) examined core antecedents and outcomes of detachment in employee samples. With regard to outcomes, results indicated average positive correlations between detachment and self-reported mental (i.e., less exhaustion, higher life satisfaction, more well-being, better sleep) and physical (i.e., lower physical discomfort) health, state well-being (i.e., less fatigue, higher positive affect, more intensive state of recovery), and task performance (small to medium sized effects). However, average relationships between detachment and physiological stress indicators and work motivation were not significant while associations with contextual performance and creativity were significant, but negative. Concerning work characteristics, as expected, job demands were negatively related and job resources were positively related to detachment (small sized effects). Further, analyses revealed that person characteristics such as negative affectivity/neuroticism (small sized effect) and heavy work investment (medium sized effect) were negatively related to detachment whereas detachment and demographic variables (i.e., age and gender) were not related. Moreover, we found a medium sized average negative relationship between engagement in work-related activities during non-work time and detachment. For most of the examined relationships heterogeneity of effect sizes was moderate to high. We identified study design, samples' gender distribution, and affective valence of work-related thoughts as moderators for some of these aforementioned relationships. The results of this meta-analysis point to detachment as a non-work (recovery) experience that is influenced by work-related and personal characteristics which in turn is relevant for a range of employee outcomes.
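
    To make the pooling machinery behind such average correlations concrete, here is a minimal random-effects sketch (DerSimonian-Laird on Fisher-z transformed correlations; the six study correlations and sample sizes are invented for illustration, not taken from this meta-analysis):

    ```python
    import numpy as np

    # Hypothetical per-study correlations between detachment and exhaustion.
    r = np.array([-0.28, -0.35, -0.19, -0.31, -0.24, -0.40])
    n = np.array([120, 250, 85, 310, 150, 95])

    z = np.arctanh(r)          # Fisher z transform
    v = 1.0 / (n - 3)          # within-study variance of z
    w = 1.0 / v                # fixed-effect weights

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(r) - 1)) / c)

    # Random-effects pooled estimate, back-transformed to a correlation.
    w_re = 1.0 / (v + tau2)
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = np.tanh(z_re - 1.96 * se_re), np.tanh(z_re + 1.96 * se_re)
    print(f"pooled r = {np.tanh(z_re):.3f} (95% CI {lo:.3f} to {hi:.3f}), "
          f"tau^2 = {tau2:.4f}, Q = {Q:.1f}")
    ```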

  12. Dynamics and structure of an aging binary colloidal glass

    NASA Astrophysics Data System (ADS)

    Lynch, Jennifer M.; Cianci, Gianguido C.; Weeks, Eric R.

    2008-09-01

    We study aging in a colloidal suspension consisting of micron-sized particles in a liquid. This system is made glassy by increasing the particle concentration. We observe samples composed of particles of two sizes, with a size ratio of 1:2.1 and a volume fraction ratio 1:6, using fast laser scanning confocal microscopy. This technique yields real-time, three-dimensional movies deep inside the colloidal glass. Specifically, we look at how the size, motion, and structural organization of the particles relate to the overall aging of the glass. Particles move in spatially heterogeneous cooperative groups. These mobile regions tend to be richer in small particles, and these small particles facilitate the motion of nearby particles of both sizes.

  13. A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.

    PubMed

    Bord, Séverine; Bioche, Christèle; Druilhet, Pierre

    2018-05-01

    We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
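
    For concreteness, here is a small sketch of the removal-sampling likelihood and the stabilizing effect of penalizing large population sizes. The catch data and the penalty form (a gentle exponential prior on N) are illustrative assumptions, not the estimator recommended in the paper:

    ```python
    import numpy as np
    from scipy.special import gammaln

    catches = np.array([42, 29, 21])   # hypothetical removal counts per occasion

    def log_lik(N, p, c):
        """Log-likelihood of removal data: c_j ~ Binomial(remaining_j, p)."""
        if N < c.sum():
            return -np.inf
        remaining = N - np.concatenate(([0], np.cumsum(c)[:-1]))
        ll = 0.0
        for n_j, c_j in zip(remaining, c):
            ll += (gammaln(n_j + 1) - gammaln(c_j + 1) - gammaln(n_j - c_j + 1)
                   + c_j * np.log(p) + (n_j - c_j) * np.log1p(-p))
        return ll

    Ns = np.arange(catches.sum(), 2000)
    ps = np.linspace(0.01, 0.99, 99)
    grid = np.array([[log_lik(N, p, catches) for p in ps] for N in Ns])

    profile = grid.max(axis=1)         # profile log-likelihood over p
    print("unpenalized MLE of N:", Ns[np.argmax(profile)])

    # Penalize implausibly large N (illustrative exponential prior, mean 500).
    penalized = profile - Ns / 500.0
    print("penalized estimate of N:", Ns[np.argmax(penalized)])
    ```

    With slowly declining catches the unpenalized profile is nearly flat in N, which is exactly the instability the authors trace to diffuse priors; the penalty flattens out to a finite, stable maximum.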

  14. Transition from Forward Smoldering to Flaming in Small Polyurethane Foam Samples

    NASA Technical Reports Server (NTRS)

    Bar-Ilan, A.; Putzeys, O.; Rein, G.; Fernandez-Pello, A. C.

    2004-01-01

    Experimental observations are presented of the effect of the flow velocity and oxygen concentration, and of a thermal radiant flux, on the transition from smoldering to flaming in forward smoldering of small samples of polyurethane foam with a gas/solid interface. The experiments are part of a project studying the transition from smolder to flaming under conditions encountered in spacecraft facilities, i.e., microgravity and low-velocity, variable oxygen concentration flows. Because the microgravity experiments are planned for the International Space Station, the foam samples had to be limited in size for safety and launch-mass reasons. The feasible sample size is too small for smolder to self-propagate because of heat losses to the surrounding environment. Thus, the smolder propagation and the transition to flaming had to be assisted by reducing the heat losses to the surroundings and increasing the oxygen concentration. The experiments are conducted with small parallelepiped samples placed vertically in a wind tunnel. Three of the sample's lateral sides are maintained at elevated temperature, and the fourth side is exposed to an upward flow and to a radiant flux. It is found that decreasing the flow velocity and increasing its oxygen concentration, and/or increasing the radiant flux, enhances the transition to flaming and reduces the delay time to transition. Limiting external ambient conditions for the transition to flaming are reported for the present experimental set-up. The results show that smolder propagation and the transition to flaming can occur in relatively small fuel samples if the external conditions are appropriate. The results also indicate that the transition to flaming occurs in the char left behind by the smolder reaction, and it has the characteristics of a gas-phase ignition induced by the smolder reaction, which acts as the source of both gaseous fuel and heat.

  15. The Effects of Small Sample Size on Identifying Polytomous DIF Using the Liu-Agresti Estimator of the Cumulative Common Odds Ratio

    ERIC Educational Resources Information Center

    Carvajal, Jorge; Skorupski, William P.

    2010-01-01

    This study is an evaluation of the behavior of the Liu-Agresti estimator of the cumulative common odds ratio when identifying differential item functioning (DIF) with polytomously scored test items using small samples. The Liu-Agresti estimator has been proposed by Penfield and Algina as a promising approach for the study of polytomous DIF but no…

  16. Radar Measurements of Small Debris from HUSIR and HAX

    NASA Technical Reports Server (NTRS)

    Hamilton, J.; Blackwell, C.; McSheehy, R.; Juarez, Q.; Anz-Meador, P.

    2017-01-01

    For many years, the NASA Orbital Debris Program Office has been collecting measurements of the orbital debris environment from the Haystack Ultra-wideband Satellite Imaging Radar (HUSIR) and its auxiliary (HAX). These measurements sample the small debris population in low earth orbit (LEO). This paper will provide an overview of recent observations and highlight trends in selected debris populations. Using the NASA size estimation model, objects with a characteristic size of 1 cm and larger observed from HUSIR will be presented. Also, objects with a characteristic size of 2 cm and larger observed from HAX will be presented.

  17. Externally pressurized porous cylinder for multiple surface aerosol generation and method of generation

    DOEpatents

    Apel, Charles T.; Layman, Lawrence R.; Gallimore, David L.

    1988-01-01

    A nebulizer for generating aerosol having small droplet sizes and high efficiency at low sample introduction rates. The nebulizer has a cylindrical gas-permeable active surface. A sleeve is disposed around the cylinder, and gas is provided from the sleeve to the interior of the cylinder formed by the active surface. In operation, a liquid is provided to the inside of the gas-permeable surface. The gas contacts the wetted surface and forms small bubbles which burst to form an aerosol. Those bubbles which are large are carried by momentum to another part of the cylinder, where they are renebulized. This process continues until the entire sample is nebulized into aerosol-sized droplets.

  18. The dependence of the CO2 removal efficiency of LiOH on humidity and mesh size. [in spacecraft life support systems

    NASA Technical Reports Server (NTRS)

    Davis, S. H.; Kissinger, L. D.

    1978-01-01

    The effect of humidity on the CO2 removal efficiency of small beds of anhydrous LiOH has been studied. Experimental data taken in this small bed system clearly show that there is an optimum humidity for beds loaded with LiOH from a single lot. The CO2 efficiency falls rapidly under dry conditions, but this behavior is approximately the same in all samples. The behavior of the bed under wet conditions is quite dependent on material size distribution. The presence of large particles in a sample can lead to rapid fall off in the CO2 efficiency as the humidity increases.

  19. Size distribution and growth rate of crystal nuclei near critical undercooling in small volumes

    NASA Astrophysics Data System (ADS)

    Kožíšek, Z.; Demo, P.

    2017-11-01

    Kinetic equations are numerically solved within the standard nucleation model to determine the size distribution of nuclei in small volumes near critical undercooling. Critical undercooling, at which the first nuclei are detected within the system, depends on the droplet volume. The size distribution of nuclei reaches its stationary value after some time delay and decreases with nucleus size. Only a certain maximum size of nuclei is reached in small volumes near critical undercooling. As a model system, we selected the recently studied nucleation in a Ni droplet [J. Bokeloh et al., Phys. Rev. Lett. 107 (2011) 145701] due to the available experimental and simulation data. However, using these data for sample masses from 23 μg up to 63 mg (corresponding to experiments) leads to a size distribution of nuclei in which no critical nuclei are formed in the Ni droplet (the number of critical nuclei < 1). If one takes into account the size dependence of the interfacial energy, the size distribution of nuclei increases to reasonable values. In smaller volumes (V ≤ 10^-9 m^3), the nucleus size reaches some maximum extreme value, which quickly increases with undercooling. Supercritical clusters continue their growth only if the number of critical nuclei is sufficiently high.

  20. Evaluation of the local homogeneity fluctuation of sinter of the small chip size MLCCs by means of mid-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Tsuzuku, Koichiro; Hagiwara, Tomoya; Takeoka, Shunsuke; Ikemoto, Yuka

    2008-05-01

    Vibration bands of dielectric ceramics appear in the mid-infrared (MIR), and their position and shape change with the environment of the crystal lattice. Therefore, micro-focus MIR spectroscopy is a useful tool for evaluating very small capacitors (e.g., smaller than 0.5 mm in chip size). Very small multi-layer ceramic capacitors (MLCCs) are important devices in high-quality electrical products such as cell phones. The quality and reliability of an MLCC correspond not only to its average dielectric properties but also to the local fluctuation of those properties. Furthermore, the local fluctuation of the dielectric properties of an MLCC can be evaluated with MIR spectroscopy. It is possible to obtain a satisfactory MIR spectrum from small samples using a micro-focus spectrometer combined with synchrotron radiation as a high-luminance light source at beamline BL43IR of SPring-8. From the above result, it is possible to evaluate the degree of homogeneity by comparing the shape change of the Ti-O peak in the IR spectra.

  1. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    PubMed Central

    2012-01-01

    Background Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, where population figures are inaccurate. To be feasible, cluster samples need to be small without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as it requires smaller sample sizes. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local vaccination coverage (VC), using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), the standard errors of the VC and intra-cluster correlation (ICC) estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes. PMID:23057445

  2. Design and Weighting Methods for a Nationally Representative Sample of HIV-infected Adults Receiving Medical Care in the United States-Medical Monitoring Project

    PubMed Central

    Iachan, Ronaldo; Johnson, Christopher H.; Harding, Richard L.; Kyle, Tonja; Saavedra, Pedro; Frazier, Emma L.; Beer, Linda; Mattson, Christine L.; Skarbinski, Jacek

    2016-01-01

    Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both the facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. The methods were efficient (i.e., they induced small unequal-weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring of trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
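
    The weighting logic, the inverse of the product of stage-wise selection probabilities followed by nonresponse and multiplicity adjustments, can be shown in a few lines. A toy sketch with invented probabilities (not MMP's actual adjustment cells or rates):

    ```python
    # One sampled patient, selected through three stages (probabilities invented).
    p_area = 0.40        # probability the jurisdiction was selected (PPS)
    p_facility = 0.10    # probability the facility was selected within the area
    p_patient = 0.05     # probability the patient was selected within the facility

    base_weight = 1.0 / (p_area * p_facility * p_patient)

    # Nonresponse adjustments: inflate weights by the inverse response rate
    # within facility- and patient-level adjustment classes.
    rr_facility = 0.85
    rr_patient = 0.76
    nr_adjusted = base_weight / (rr_facility * rr_patient)

    # Multiplicity adjustment: a patient seen at k eligible facilities had k
    # chances of selection, so the weight is divided by k.
    k_facilities = 2
    final_weight = nr_adjusted / k_facilities

    print(f"base {base_weight:.0f} -> nonresponse-adjusted {nr_adjusted:.0f} "
          f"-> final {final_weight:.0f}")
    ```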

  3. Stoichiometry of Cd(S,Se) nanocrystals by anomalous small-angle x-ray scattering

    NASA Astrophysics Data System (ADS)

    Ramos, Aline; Lyon, Olivier; Levelut, Claire

    1995-12-01

    In Cd(S,Se)-doped glasses the optical properties are strongly dependent on the size of the nanocrystals, but can also be largely modified by changes in the crystal stoichiometry; however, information on both stoichiometry and size is difficult to obtain in crystals smaller than 10 nm. The intensity scattered at small angles is classically used to get information about nanoparticle sizes. Moreover, the variation of the amplitude of this intensity with the energy of the X-ray, "the anomalous effect", near the selenium edge is related to stoichiometry. Anomalous small-angle X-ray scattering has been used as a tentative method to get information about stoichiometry in nanocrystals with sizes below 10 nm. Experiments have been performed on samples treated for 2 days at temperatures in the range 540-650 °C. The samples treated at temperatures above 580 °C contain crystals with sizes larger than 4 nm. For all these samples the anomalous effect has nearly the same amplitude, and we found the stoichiometry x=0.4 for the CdSxSe1-x nanocrystals. This agrees with previous results obtained by scanning electron microscopy and Raman spectroscopy. The results are also confirmed by measurements of the position of the optical absorption edge and by wide-angle X-ray scattering experiments. For the sample treated at 560 °C, the nanocrystal size is 3 nm and the stoichiometry x=0.6 is deduced from the anomalous effect. For samples treated at lower temperatures the anomalous effect is not observable, indicating an even lower selenium content in the nanocrystals (x≳0.7). We observed differences in the Se content of nanocrystals for different heat treatments of the same initial glass. These results may be very helpful in interpreting the change in the optical properties when the temperature of the treatments decreases in the range 560-590 °C. In this temperature range, compositional effects seem to be of the same order of magnitude as the effects of quantum confinement.

  4. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
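
    A minimal sketch of the simulation approach described: serostatus data are generated from a catalytic model whose SCR drops at a known change point, and power is the fraction of simulated surveys in which a likelihood-ratio test rejects a stable SCR. The simplifications here (no seroreversion, uniform age distribution, known change point) are assumptions of this sketch, not necessarily of the published calculator.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import chi2

      rng = np.random.default_rng(1)

      def seroprevalence(age, scr_old, scr_new, change):
          """P(seropositive | age) when the SCR dropped from scr_old to scr_new
          `change` years before the survey (no seroreversion assumed)."""
          young = 1 - np.exp(-scr_new * age)  # lived only under the new SCR
          old = 1 - np.exp(-scr_new * change - scr_old * (age - change))
          return np.where(age <= change, young, old)

      def negloglik(params, age, sero, change, stable):
          lam = np.exp(params)  # log-parametrization keeps rates positive
          p = seroprevalence(age, lam[0], lam[0] if stable else lam[1], change)
          p = np.clip(p, 1e-10, 1 - 1e-10)
          return -np.sum(sero * np.log(p) + (1 - sero) * np.log(1 - p))

      def power(n, scr_old=0.1, scr_new=0.05, change=10, nsim=200, alpha=0.05):
          hits = 0
          for _ in range(nsim):
              age = rng.uniform(1, 60, n)
              sero = rng.random(n) < seroprevalence(age, scr_old, scr_new, change)
              ll0 = minimize(negloglik, [np.log(0.08)], args=(age, sero, change, True)).fun
              ll1 = minimize(negloglik, [np.log(0.08)] * 2, args=(age, sero, change, False)).fun
              hits += 2 * (ll0 - ll1) > chi2.ppf(1 - alpha, df=1)
          return hits / nsim

      print(power(n=500))  # share of simulated surveys detecting the halved SCR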

  5. Laser Diffraction Techniques Replace Sieving for Lunar Soil Particle Size Distribution Data

    NASA Technical Reports Server (NTRS)

    Cooper, Bonnie L.; Gonzalez, C. P.; McKay, D. S.; Fruland, R. L.

    2012-01-01

    Sieving was used extensively until 1999 to determine the particle size distribution of lunar samples. This method is time-consuming, and requires more than a gram of material in order to obtain a result in which one may have confidence. This is demonstrated by the difference in geometric mean and median for samples measured by [1], in which a 14-gram sample produced a geometric mean of approx. 52 micrometers, whereas two other samples of 1.5 grams gave geometric means of approx. 63 and approx. 69 micrometers. Sample allocations for sieving are typically much smaller than a gram, and many of the sample allocations received by our lab are 0.5 to 0.25 grams in mass. Basu [2] has described how the finest fraction of the soil is easily lost in the sieving process, and this effect is compounded when sample sizes are small.

  6. Epistemological Issues in Astronomy Education Research: How Big of a Sample is "Big Enough"?

    NASA Astrophysics Data System (ADS)

    Slater, Stephanie; Slater, T. F.; Souri, Z.

    2012-01-01

    As astronomy education research (AER) continues to evolve into a sophisticated enterprise, we must begin to grapple with defining our epistemological parameters. Moreover, as we attempt to make pragmatic use of our findings, we must make a concerted effort to communicate those parameters in a sensible way to the larger astronomical community. One area of much current discussion concerns the methodologies, and attendant sample sizes, that should be considered appropriate for generating knowledge in the field. To address this question, we completed a meta-analysis of nearly 1,000 peer-reviewed studies published in top tier professional journals. Data related to methodologies and sample sizes were collected from "hard science" and "human science" journals to compare the epistemological systems of these two bodies of knowledge. Working back in time from August 2011, the 100 most recent studies reported in each journal were used as a data source: Icarus, ApJ and AJ, NARST, IJSE and SciEd. In addition, data were collected from the 10 most recent AER dissertations, a set of articles determined by the science education community to be the most influential in the field, and the nearly 400 articles used as reference materials for the NRC's Taking Science to School. Analysis indicates these bodies of knowledge have a great deal in common; each relies on a large variety of methodologies, and each builds its knowledge through studies that proceed from surprisingly low sample sizes. While both fields publish a small percentage of studies with large sample sizes, the vast majority of top tier publications consist of rich studies of a small number of objects. We conclude that rigor in each field is determined not by a circumscription of methodologies and sample sizes, but by peer judgments that the methods and sample sizes are appropriate to the research question.

  7. DRME: Count-based differential RNA methylation analysis at small sample size scenario.

    PubMed

    Liu, Lian; Zhang, Shao-Wu; Gao, Fan; Zhang, Yixin; Huang, Yufei; Chen, Runsheng; Meng, Jia

    2016-04-15

    Differential methylation, which concerns differences in the degree of epigenetic regulation via methylation between two conditions, has been formulated as a beta or beta-binomial distribution to address the within-group biological variability in sequencing data. However, a beta or beta-binomial model is usually difficult to infer in small sample size scenarios with discrete read counts in sequencing data. On the other hand, as an emerging research field, RNA methylation has drawn more and more attention recently, and the differential analysis of RNA methylation is significantly different from that of DNA methylation due to the impact of transcriptional regulation. We developed DRME to better address the differential RNA methylation problem. The proposed model can effectively describe within-group biological variability in small sample size scenarios and handles the impact of transcriptional regulation on RNA methylation. We tested the newly developed DRME algorithm on simulated data and 4 MeRIP-Seq case-control studies and compared it with Fisher's exact test. It is in principle widely applicable to several other RNA-related data types as well, including RNA Bisulfite sequencing and PAR-CLIP. The code together with an MeRIP-Seq dataset is available online (https://github.com/lzcyzm/DRME) for evaluation and reproduction of the figures shown in this article. Copyright © 2016 Elsevier Inc. All rights reserved.
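
    The distributional issue the abstract raises can be made concrete with a small two-group beta-binomial likelihood-ratio test on methylated/total read counts; this is a generic illustration of the setting, not the DRME algorithm itself, and the counts are invented.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import betabinom, chi2

      def negloglik(params, k, n):
          """Negative beta-binomial log-likelihood; params are log(alpha), log(beta)."""
          a, b = np.exp(params)
          return -np.sum(betabinom.logpmf(k, n, a, b))

      def fitted_nll(k, n):
          return minimize(negloglik, [0.0, 0.0], args=(k, n), method="Nelder-Mead").fun

      # Methylated reads k out of total reads n, three replicates per condition.
      k1, n1 = np.array([12, 15, 9]), np.array([40, 50, 30])
      k2, n2 = np.array([30, 28, 33]), np.array([45, 40, 50])

      nll_separate = fitted_nll(k1, n1) + fitted_nll(k2, n2)  # own (alpha, beta) per group
      nll_pooled = fitted_nll(np.r_[k1, k2], np.r_[n1, n2])   # shared (alpha, beta)
      lrt = 2 * (nll_pooled - nll_separate)                   # 2 extra parameters under H1
      print(f"LRT = {lrt:.2f}, p = {chi2.sf(lrt, df=2):.4f}")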

  8. Sampling methods for amphibians in streams in the Pacific Northwest.

    Treesearch

    R. Bruce Bury; Paul Stephen Corn

    1991-01-01

    Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

  9. Measuring Endocrine-active Chemicals at ng/L Concentrations in Water

    EPA Science Inventory

    Analytical chemistry challenges for supporting aquatic toxicity research and risk assessment are many: need for low detection limits, complex sample matrices, small sample size, and equipment limitations to name a few. Certain types of potent endocrine disrupting chemicals (EDCs)...

  10. Annual design-based estimation for the annualized inventories of forest inventory and analysis: sample size determination

    Treesearch

    Hans T. Schreuder; Jin-Mann S. Lin; John Teply

    2000-01-01

    The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...

  11. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
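
    The 13% and 70% figures quoted above can be reproduced with the usual normal-approximation sample size formula under a Bonferroni-corrected two-sided alpha; assuming, as a sketch, that this is how the scaling is computed, n is proportional to (z_{alpha/(2m)} + z_{power})^2 for m tests:

      from scipy.stats import norm

      def relative_n(m, alpha=0.05, power=0.80):
          """Required sample size up to a constant factor, for m tests."""
          z_alpha = norm.isf(alpha / (2 * m))  # Bonferroni-adjusted critical value
          z_power = norm.isf(1 - power)
          return (z_alpha + z_power) ** 2

      print(relative_n(10) / relative_n(1))     # ~1.70: 70% more for 10 tests vs 1
      print(relative_n(1e7) / relative_n(1e6))  # ~1.13: 13% more for 10M vs 1M tests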

  12. The evolution of body size and shape in the human career

    PubMed Central

    Grabowski, Mark; Hatala, Kevin G.; Richmond, Brian G.

    2016-01-01

    Body size is a fundamental biological property of organisms, and documenting body size variation in hominin evolution is an important goal of palaeoanthropology. Estimating body mass appears deceptively simple but is laden with theoretical and pragmatic assumptions about best predictors and the most appropriate reference samples. Modern human training samples with known masses are arguably the ‘best’ for estimating size in early bipedal hominins such as the australopiths and all members of the genus Homo, but it is not clear if they are the most appropriate priors for reconstructing the size of the earliest putative hominins such as Orrorin and Ardipithecus. The trajectory of body size evolution in the early part of the human career is reviewed here and found to be complex and nonlinear. Australopith body size varies enormously across both space and time. The pre-erectus early Homo fossil record from Africa is poor and dominated by relatively small-bodied individuals, implying that the emergence of the genus Homo is probably not linked to an increase in body size or unprecedented increases in size variation. Body size differences alone cannot explain the observed variation in hominin body shape, especially when examined in the context of small fossil hominins and pygmy modern humans. This article is part of the themed issue ‘Major transitions in human evolution’. PMID:27298459

  13. The evolution of body size and shape in the human career.

    PubMed

    Jungers, William L; Grabowski, Mark; Hatala, Kevin G; Richmond, Brian G

    2016-07-05

    Body size is a fundamental biological property of organisms, and documenting body size variation in hominin evolution is an important goal of palaeoanthropology. Estimating body mass appears deceptively simple but is laden with theoretical and pragmatic assumptions about best predictors and the most appropriate reference samples. Modern human training samples with known masses are arguably the 'best' for estimating size in early bipedal hominins such as the australopiths and all members of the genus Homo, but it is not clear if they are the most appropriate priors for reconstructing the size of the earliest putative hominins such as Orrorin and Ardipithecus. The trajectory of body size evolution in the early part of the human career is reviewed here and found to be complex and nonlinear. Australopith body size varies enormously across both space and time. The pre-erectus early Homo fossil record from Africa is poor and dominated by relatively small-bodied individuals, implying that the emergence of the genus Homo is probably not linked to an increase in body size or unprecedented increases in size variation. Body size differences alone cannot explain the observed variation in hominin body shape, especially when examined in the context of small fossil hominins and pygmy modern humans. This article is part of the themed issue 'Major transitions in human evolution'. © 2016 The Author(s).

  14. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
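
    A quick numerical illustration of the fallacy (ours, not the article's): a standardized effect most would call trivial, Cohen's d = 0.05, becomes extremely statistically significant once the per-group sample size is large enough.

      from scipy.stats import norm

      d = 0.05  # trivial standardized effect
      for n in (100, 1_000, 100_000):  # per-group sample sizes
          z = d * (n / 2) ** 0.5       # two-sample z statistic, equal groups, unit SD
          p = 2 * norm.sf(z)
          print(f"n = {n:>7,} per group: z = {z:5.2f}, p = {p:.2e}")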

  15. A Generalized Least Squares Regression Approach for Computing Effect Sizes in Single-Case Research: Application Examples

    ERIC Educational Resources Information Center

    Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.

    2011-01-01

    A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
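
    A sketch in the spirit of the approach: generalized least squares with an AR(1) error structure fitted to a two-phase single-case series, with the phase coefficient standardized into an effect size. The data are synthetic, and the details of the published method are not reproduced.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)

      # Synthetic baseline (A) and intervention (B) phases with AR(1) noise.
      n_a, n_b, rho = 10, 10, 0.4
      e = np.zeros(n_a + n_b)
      for t in range(1, len(e)):
          e[t] = rho * e[t - 1] + rng.normal(0, 1)
      phase = np.r_[np.zeros(n_a), np.ones(n_b)]
      y = 5 + 2.0 * phase + e  # true phase effect = 2.0

      X = sm.add_constant(phase)
      model = sm.GLSAR(y, X, rho=1)           # AR(1) error structure
      res = model.iterative_fit(maxiter=10)   # alternate rho and beta estimation
      effect = res.params[1] / np.sqrt(res.scale)  # standardized phase effect
      print(f"phase shift = {res.params[1]:.2f}, effect size = {effect:.2f}")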

  16. Inherent size effects on XANES of nanometer metal clusters: Size-selected platinum clusters on silica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Yang; Gorey, Timothy J.; Anderson, Scott L.

    2016-12-12

    X-ray absorption near-edge structure (XANES) is commonly used to probe the oxidation state of metal-containing nanomaterials; however, as the particle size in the material drops below a few nanometers, it becomes important to consider inherent size effects on the electronic structure of the materials. In this paper, we analyze a series of size-selected Ptn/SiO2 samples, using X-ray photoelectron spectroscopy (XPS), low energy ion scattering, grazing-incidence small angle X-ray scattering, and XANES. The oxidation state and morphology are characterized both as-deposited in UHV, and after air/O2 exposure and annealing in H2. Here, the clusters are found to be stable during deposition and upon air exposure, but sinter if heated above ~150 °C. XANES shows shifts in the Pt L3 edge, relative to bulk Pt, that increase with decreasing cluster size, and the cluster samples show high white line intensity. Reference to bulk standards would suggest that the clusters are oxidized; however, XPS shows that they are not. Instead, the XANES effects are attributable to development of a band gap and localization of empty state wavefunctions in small clusters.

  17. Gravity or turbulence? IV. Collapsing cores in out-of-virial disguise

    NASA Astrophysics Data System (ADS)

    Ballesteros-Paredes, Javier; Vázquez-Semadeni, Enrique; Palau, Aina; Klessen, Ralf S.

    2018-06-01

    We study the dynamical state of massive cores by using a simple analytical model, an observational sample, and numerical simulations of collapsing massive cores. From the analytical model, we find that cores increase their column density and velocity dispersion as they collapse, resulting in a time evolution path in the Larson velocity dispersion-size diagram from large sizes and small velocity dispersions to small sizes and large velocity dispersions, while they tend to equipartition between gravity and kinetic energy. From the observational sample, we find that: (a) cores with substantially different column densities in the sample do not follow a Larson-like linewidth-size relation. Instead, cores with higher column densities tend to be located in the upper-left corner of the Larson velocity dispersion (σv,3D)-size (R) diagram, a result explained in the hierarchical and chaotic collapse scenario. (b) Cores appear to have overvirial values. Finally, our numerical simulations reproduce the behavior predicted by the analytical model and depicted in the observational sample: collapsing cores evolve towards larger velocity dispersions and smaller sizes as they collapse and increase their column density. More importantly, however, they exhibit overvirial states. This apparent excess is due to the assumption that the gravitational energy is given by the energy of an isolated homogeneous sphere. However, such excess disappears when the gravitational energy is correctly calculated from the actual spatial mass distribution. We conclude that the observed energy budget of cores is consistent with their non-thermal motions being driven by their self-gravity and in the process of dynamical collapse.

  18. Understanding the City Size Wage Gap*

    PubMed Central

    Baum-Snow, Nathaniel; Pavan, Ronni

    2013-01-01

    In this paper, we decompose city size wage premia into various components. We base these decompositions on an estimated on-the-job search model that incorporates latent ability, search frictions, firm-worker match quality, human capital accumulation and endogenous migration between large, medium and small cities. Counterfactual simulations of the model indicate that variation in returns to experience and differences in wage intercepts across location type are the most important mechanisms contributing to observed city size wage premia. Variation in returns to experience is more important for generating wage premia between large and small locations while differences in wage intercepts are more important for generating wage premia between medium and small locations. Sorting on unobserved ability within education group and differences in labor market search frictions and distributions of firm-worker match quality contribute little to observed city size wage premia. These conclusions hold for separate samples of high school and college graduates. PMID:24273347

  19. Understanding the City Size Wage Gap.

    PubMed

    Baum-Snow, Nathaniel; Pavan, Ronni

    2012-01-01

    In this paper, we decompose city size wage premia into various components. We base these decompositions on an estimated on-the-job search model that incorporates latent ability, search frictions, firm-worker match quality, human capital accumulation and endogenous migration between large, medium and small cities. Counterfactual simulations of the model indicate that variation in returns to experience and differences in wage intercepts across location type are the most important mechanisms contributing to observed city size wage premia. Variation in returns to experience is more important for generating wage premia between large and small locations while differences in wage intercepts are more important for generating wage premia between medium and small locations. Sorting on unobserved ability within education group and differences in labor market search frictions and distributions of firm-worker match quality contribute little to observed city size wage premia. These conclusions hold for separate samples of high school and college graduates.

  20. Inertial impaction air sampling device

    DOEpatents

    Dewhurst, K.H.

    1990-05-22

    An inertial impactor is described for use in an air sampling device for collection of respirable-size particles in ambient air. The device may include a graphite furnace as the impaction substrate in a small-size, portable, direct analysis structure that gives immediate results and is totally self-contained allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by means of the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry. 3 figs.

  1. A comparison of small-area estimation techniques to estimate selected stand attributes using LiDAR-derived auxiliary variables

    Treesearch

    Michael E. Goerndt; Vicente J. Monleon; Hailemariam Temesgen

    2011-01-01

    One of the challenges often faced in forestry is the estimation of forest attributes for smaller areas of interest within a larger population. Small-area estimation (SAE) is a set of techniques well suited to estimation of forest attributes for small areas in which the existing sample size is small and auxiliary information is available. Selected SAE methods were...

  2. Estimating the size of hidden populations using respondent-driven sampling data: Case examples from Morocco

    PubMed Central

    Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S

    2015-01-01

    Background Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with other men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes allowing for more accurate results. However, caution should be used particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908

  3. Contrasting Size Distributions of Chondrules and Inclusions in Allende CV3

    NASA Technical Reports Server (NTRS)

    Fisher, Kent R.; Tait, Alastair W.; Simon, Justin I.; Cuzzi, Jeff N.

    2014-01-01

    There are several leading theories on the processes that led to the formation of chondrites, e.g., sorting by mass, by X-winds, turbulent concentration, and by photophoresis. The juxtaposition of refractory inclusions (CAIs) and less refractory chondrules is central to these theories and there is much to be learned from their relative size distributions. There have been a number of studies into size distributions of particles in chondrites, but only at relatively small scales, primarily for chondrules, and rarely for both Calcium-Aluminum-rich Inclusions (CAIs) and chondrules in the same sample. We have implemented macro-scale (25 cm diameter sample) and high-resolution microscale sampling of the Allende CV3 chondrite to create a complete data set of size frequencies for CAIs and chondrules.

  4. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
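
    The estimator and its precision can be sketched with the delta method: treating the service count M as fixed, the relative standard error of N = M/P equals that of P, which is why uncertainty balloons when P is small. The numbers below are hypothetical, and the paper's exact variance formulae are not reproduced.

      import math

      def multiplier_estimate(M, p_hat, n, design_effect=2.0, z=1.96):
          """Population size N = M / P with a delta-method confidence interval."""
          var_p = design_effect * p_hat * (1 - p_hat) / n  # survey variance of P
          N = M / p_hat
          se_N = M * math.sqrt(var_p) / p_hat**2           # delta method for M / P
          return N, (N - z * se_N, N + z * se_N)

      # Hypothetical study: 600 unique objects distributed, survey of n = 400,
      # 15% of respondents report having received one.
      N, ci = multiplier_estimate(M=600, p_hat=0.15, n=400, design_effect=2.0)
      print(f"N = {N:.0f}, 95% CI ({ci[0]:.0f}, {ci[1]:.0f})")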

  5. Sample size in psychological research over the past 30 years.

    PubMed

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  6. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be considered sufficient under the assumption of no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  7. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    PubMed

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address is about the consequences of estimating the probability of a cell being in a particular state from measurements of small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
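
    As a plain illustration of the estimation problem (a constrained least-squares fit, not the paper's MMSE or ML estimators), transition probabilities can be recovered from a sequence of noisy state-proportion measurements:

      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(0)

      P_true = np.array([[0.90, 0.08, 0.02],
                         [0.05, 0.85, 0.10],
                         [0.00, 0.10, 0.90]])

      # Simulate noisy proportion measurements over time (e.g., FACS readouts).
      T, sigma = 30, 0.02
      p = np.zeros((T, 3))
      p[0] = [0.6, 0.3, 0.1]
      for t in range(1, T):
          p[t] = p[t - 1] @ P_true
      obs = np.clip(p + rng.normal(0, sigma, p.shape), 0, 1)

      # Solve obs[t] @ P ~ obs[t+1] column by column with 0 <= P_ij <= 1,
      # then renormalize rows to sum to one.
      A = obs[:-1]
      P_hat = np.column_stack([lsq_linear(A, obs[1:, j], bounds=(0, 1)).x
                               for j in range(3)])
      P_hat /= P_hat.sum(axis=1, keepdims=True)
      print(np.round(P_hat, 3))  # compare against P_true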

  8. Isolation of nanoscale exosomes using viscoelastic effect

    NASA Astrophysics Data System (ADS)

    Hu, Guoqing; Liu, Chao

    2017-11-01

    Exosomes, molecular cargos secreted by almost all mammalian cells, are considered as promising biomarkers to identify many diseases including cancers. However, the small size of exosomes (30-200 nm) poses serious challenges on their isolation from the complex media containing a variety of extracellular vesicles (EVs) of different sizes, especially in small sample volumes. Here we develop a viscoelasticity-based microfluidic system to directly separate exosomes from cell culture media or serum in a continuous, size-dependent, and label-free manner. Using a small amount of biocompatible polymer (poly(ethylene oxide), PEO) as the additive into the media to control the viscoelastic forces exerted on EVs, we are able to achieve a high separation purity (>90%) and recovery (>80%) of exosomes. The size cutoff in viscoelasticity-based microfluidics can be easily controlled using different PEO concentrations. Based on this size-dependent viscoelastic separation strategy, we envision the handling of diverse nanoscale objects, such as gold nanoparticles, DNA origami structures, and quantum dots. This work was supported financially by National Natural Science Foundation of China (11572334, 91543125).

  9. The Missing Link: Workplace Education in Small Business.

    ERIC Educational Resources Information Center

    BCEL Newsletter for the Business & Literacy Communities, 1992

    1992-01-01

    A study sought to determine how and why small businesses invest or do not invest in basic skills instruction for their workers. Data were gathered through a national mail and telephone survey of a random sampling of 11,000 small (50 or fewer employees) and medium-sized (51-400 employees) firms, a targeted mail survey of 4,317 manufacturers, a…

  10. Small area estimation (SAE) model: Case study of poverty in West Java Province

    NASA Astrophysics Data System (ADS)

    Suhartini, Titin; Sadik, Kusman; Indahwati

    2016-02-01

    This paper compared direct estimation with indirect/Small Area Estimation (SAE) models. Model selection included resolving multicollinearity in the auxiliary variables, either by retaining only non-collinear variables or by implementing principal components (PC). The parameters of concern were the area-level proportions of agricultural-venture poor households and of agricultural poor households in West Java Province. These parameters can be estimated either directly or through SAE. Direct estimation was problematic: sample sizes were small, and for three areas the sample size was even zero, so direct estimation could not be carried out there. The proportion of agricultural-venture poor households was 19.22% and that of agricultural poor households 46.79%. The best model for agricultural-venture poor households retained only non-collinear variables, while the best model for agricultural poor households implemented PC. For both proportions, SAE performed better than direct estimation at the area level in West Java Province. Implementing SAE thus overcame the small sample sizes and yielded small-area estimates with higher accuracy and better precision than the direct estimator.
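
    The abstract does not name its exact estimator; the standard area-level SAE choice is the Fay-Herriot model, sketched below with a crude method-of-moments fit. All direct estimates, sampling variances, and auxiliary data are hypothetical.

      import numpy as np

      theta_hat = np.array([0.42, 0.55, 0.31, 0.60, 0.48])  # direct estimates
      psi = np.array([0.010, 0.020, 0.015, 0.030, 0.012])   # known sampling variances
      X = np.column_stack([np.ones(5), [0.3, 0.6, 0.2, 0.7, 0.5]])  # auxiliary data

      def fay_herriot(theta_hat, psi, X, n_iter=50):
          """EBLUP under the area-level model theta_hat = X beta + u + e."""
          sigma2_u = np.var(theta_hat)  # crude start for the model variance
          for _ in range(n_iter):
              w = 1.0 / (sigma2_u + psi)
              beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * theta_hat))
              resid = theta_hat - X @ beta
              sigma2_u = max(0.0, np.mean(resid**2 - psi))  # method-of-moments update
          gamma = sigma2_u / (sigma2_u + psi)  # per-area shrinkage factor
          return gamma * theta_hat + (1 - gamma) * (X @ beta)

      print(np.round(fay_herriot(theta_hat, psi, X), 3))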

  11. Terrestrial in situ sampling of dust devils (relative particle loads and vertical grain size distributions) as an equivalent for martian dust devils.

    NASA Astrophysics Data System (ADS)

    Raack, J.; Dennis, R.; Balme, M. R.; Taj-Eddine, K.; Ori, G. G.

    2017-12-01

    Dust devils are small vertical convective vortices which occur on Earth and Mars [1], but their internal structure is almost unknown. Here we report on in situ samples of two active dust devils in the Sahara Desert in southern Morocco [2]. For the sampling we used a 4 m high aluminium pipe with sampling areas made of removable adhesive tape. We took samples between 0.1-4 m with a sampling interval of 0.5 m and between 0.5-2 m with an interval of 0.25 m, respectively. The maximum diameters of all particles at the different sampling heights were then measured using an optical microscope to obtain vertical grain size distributions and relative particle loads. Our measurements imply that both dust devils have a generally comparable internal structure despite their different strengths and dimensions, which indicates that dust devils probably reflect the surficial grain size distribution they move over. The particle sizes within the dust devils decrease nearly exponentially with height, which is comparable to results by [3]. Furthermore, our results show that about 80-90 % of the total particle load was lifted only within the first meter, which is direct evidence for the existence of a sand skirt. If we assume that grains with a diameter <31 μm can go into suspension [4], our results show that less than 0.1 wt% can be entrained into the atmosphere. Although this amount seems very low, these values represent between 60 and 70 % of all lifted particles due to the small grain sizes and their low weight. On Mars, the amount of lifted particles will be generally higher as the dust coverage is larger [5], although the atmosphere can only suspend smaller grain sizes (<20 μm) [6] compared to Earth. During our field campaign we observed numerous larger dust devils each day, which were up to several hundred meters tall and had diameters of several tens of meters. This implies a much higher input of fine-grained material into the atmosphere (which will have an influence on the climate, weather, and human health [7]) compared to the relatively small dust devils sampled during our field campaign. [1] Thomas and Gierasch (1985) Science 230 [2] Raack et al. (2017) Astrobiology [3] Oke et al. (2007) J. Arid Environ. 71 [4] Balme and Greeley (2006) Rev. Geophys. 44 [5] Christensen (1986) JGR 91 [6] Newman et al. (2002) JGR 107 [7] Gillette and Sinclair (1990) Atmos. Environ. 24

  12. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
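
    A sketch of such a calculation, using one common design-effect approximation for a two-period cross-sectional cluster crossover design: DE = 1 + (m - 1) * rho_w - m * rho_b, where m is the cluster-period size, rho_w the within-cluster within-period correlation, and rho_b the within-cluster between-period correlation. This generic form and the numbers below are assumptions of the sketch; the tutorial's exact formulae are not reproduced.

      from scipy.stats import norm

      def n_individual(p1, p2, alpha=0.05, power=0.80):
          """Per-arm n for two proportions under individual randomisation (normal approx.)."""
          z = norm.isf(alpha / 2) + norm.isf(1 - power)
          pbar = (p1 + p2) / 2
          return z**2 * 2 * pbar * (1 - pbar) / (p1 - p2) ** 2

      def cluster_periods_per_arm(p1, p2, m, rho_w, rho_b):
          de = 1 + (m - 1) * rho_w - m * rho_b  # CRXO design effect (see caveat above)
          return n_individual(p1, p2) * de / m

      # E.g., detect an in-hospital mortality drop 10% -> 9% with 300 patients
      # per ICU-period; all correlations hypothetical.
      print(round(cluster_periods_per_arm(0.10, 0.09, m=300, rho_w=0.03, rho_b=0.02)))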

  13. “Nanofiltration” Enabled by Super-Absorbent Polymer Beads for Concentrating Microorganisms in Water Samples

    NASA Astrophysics Data System (ADS)

    Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R.

    2016-02-01

    Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation.

  14. "Nanofiltration" Enabled by Super-Absorbent Polymer Beads for Concentrating Microorganisms in Water Samples.

    PubMed

    Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R

    2016-02-15

    Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation.

  15. “Nanofiltration” Enabled by Super-Absorbent Polymer Beads for Concentrating Microorganisms in Water Samples

    PubMed Central

    Xie, Xing; Bahnemann, Janina; Wang, Siwen; Yang, Yang; Hoffmann, Michael R.

    2016-01-01

    Detection and quantification of pathogens in water is critical for the protection of human health and for drinking water safety and security. When the pathogen concentrations are low, large sample volumes (several liters) are needed to achieve reliable quantitative results. However, most microbial identification methods utilize relatively small sample volumes. As a consequence, a concentration step is often required to detect pathogens in natural waters. Herein, we introduce a novel water sample concentration method based on superabsorbent polymer (SAP) beads. When SAP beads swell with water, small molecules can be sorbed within the beads, but larger particles are excluded and, thus, concentrated in the residual non-sorbed water. To illustrate this approach, millimeter-sized poly(acrylamide-co-itaconic acid) (P(AM-co-IA)) beads are synthesized and successfully applied to concentrate water samples containing two model microorganisms: Escherichia coli and bacteriophage MS2. Experimental results indicate that the size of the water channel within water swollen P(AM-co-IA) hydrogel beads is on the order of several nanometers. The millimeter size coupled with a negative surface charge of the beads are shown to be critical in order to achieve high levels of concentration. This new concentration procedure is very fast, effective, scalable, and low-cost with no need for complex instrumentation. PMID:26876979

  16. Kinematic measurement from panned cinematography.

    PubMed

    Gervais, P; Bedingfield, E W; Wronko, C; Kollias, I; Marchiori, G; Kuntz, J; Way, N; Kuiper, D

    1989-06-01

    Traditional 2-D cinematography has used a stationary camera with its optical axis perpendicular to the plane of motion. This method has constrained the size of the object plane or has introduced potential errors from a small subject image size with large object field widths. The purpose of this study was to assess a panning technique that could overcome the inherent limitations of small object field widths, small object image sizes and limited movement samples. The proposed technique used a series of reference targets in the object field that provided the necessary scales and origin translations. A 102 m object field was panned. Comparisons between criterion distances and film measured distances for field widths of 46 m and 22 m resulted in absolute mean differences that were comparable to that of the traditional method.

  17. Drying regimes in homogeneous porous media from macro- to nanoscale

    NASA Astrophysics Data System (ADS)

    Thiery, J.; Rodts, S.; Weitz, D. A.; Coussot, P.

    2017-07-01

    Magnetic resonance imaging visualization down to nanometric liquid films in model porous media with pore sizes from micro- to nanometers enables one to fully characterize the physical mechanisms of drying. For pore size larger than a few tens of nanometers, we identify an initial constant drying rate period, probing homogeneous desaturation, followed by a falling drying rate period. This second period is associated with the development of a gradient in saturation underneath the sample free surface that initiates the inward recession of the contact line. During this latter stage, the drying rate varies in accordance with vapor diffusion through the dry porous region, possibly affected by the Knudsen effect for small pore size. However, we show that for sufficiently small pore size and/or saturation the drying rate is increasingly reduced by the Kelvin effect. Subsequently, we demonstrate that this effect governs the kinetics of evaporation in nanopores as a homogeneous desaturation occurs. Eventually, under our experimental conditions, we show that the saturation unceasingly decreases in a homogeneous manner throughout the wet regions of the medium regardless of pore size or drying regime considered. This finding suggests the existence of continuous liquid flow towards the interface of higher evaporation, down to very low saturation or very small pore size. Paradoxically, even if this net flow is unidirectional and capillary driven, it corresponds to a series of diffused local capillary equilibrations over the full height of the sample, which might explain that a simple Darcy's law model does not predict the effect of scaling of the net flow rate on the pore size observed in our tests.

  18. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    Bootstrapping is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation for comparing two parallel-design arms on continuous data by a bootstrap procedure is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculation by mathematical formulas (under the normal distribution assumption) for the identical data is also carried out. Consequently, the power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the process of bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that by the bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the beginning, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during the course of bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
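
    A minimal sketch of the bootstrap procedure described: resample candidate-size groups from pilot data, apply the same test planned for the final analysis (here the Wilcoxon rank-sum test, as the abstract recommends for non-normal data), and read the power off the rejection rate. The pilot data are synthetic.

      import numpy as np
      from scipy.stats import mannwhitneyu

      rng = np.random.default_rng(42)

      # Hypothetical skewed pilot data from two parallel arms.
      pilot_a = rng.lognormal(mean=0.0, sigma=0.8, size=30)
      pilot_b = rng.lognormal(mean=0.5, sigma=0.8, size=30)

      def bootstrap_power(a, b, n_per_group, n_boot=2000, alpha=0.05):
          hits = 0
          for _ in range(n_boot):
              xa = rng.choice(a, size=n_per_group, replace=True)
              xb = rng.choice(b, size=n_per_group, replace=True)
              hits += mannwhitneyu(xa, xb).pvalue < alpha
          return hits / n_boot

      for n in (20, 40, 60, 80):
          print(f"n = {n} per group: estimated power = "
                f"{bootstrap_power(pilot_a, pilot_b, n):.2f}")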

  19. Massively parallel sequencing of 17 commonly used forensic autosomal STRs and amelogenin with small amplicons.

    PubMed

    Kim, Eun Hye; Lee, Hwan Young; Yang, In Seok; Jung, Sang-Eun; Yang, Woo Ick; Shin, Kyoung-Jin

    2016-05-01

    The next-generation sequencing (NGS) method has been utilized to analyze short tandem repeat (STR) markers, which are routinely used for human identification purposes in the forensic field. Some researchers have demonstrated the successful application of the NGS system to STR typing, suggesting that NGS technology may be an alternative or additional method to overcome limitations of capillary electrophoresis (CE)-based STR profiling. However, there has been no available multiplex PCR system that is optimized for NGS analysis of forensic STR markers. Thus, we constructed a multiplex PCR system for the NGS analysis of 18 markers (13 CODIS STRs, D2S1338, D19S433, Penta D, Penta E and amelogenin) by designing amplicons in the size range of 77-210 base pairs. Then, PCR products were generated from two single-source samples, mixed samples and artificially degraded DNA samples using a multiplex PCR system, and were prepared for sequencing on the MiSeq system through construction of a subsequent barcoded library. By performing NGS and analyzing the data, we confirmed that the resultant STR genotypes were consistent with those of CE-based typing. Moreover, sequence variations were detected in targeted STR regions. Through the use of small-sized amplicons, the developed multiplex PCR system enables researchers to obtain successful STR profiles even from artificially degraded DNA as well as STR loci which are analyzed with large-sized amplicons in the CE-based commercial kits. In addition, successful profiles can be obtained from mixtures up to a 1:19 ratio. Consequently, the developed multiplex PCR system, which produces small size amplicons, can be successfully applied to STR NGS analysis of forensic casework samples such as mixtures and degraded DNA samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses

    PubMed Central

    Okamoto, Kenta; Bielecki, Johan; Maia, Filipe R. N. C.; Mühlig, Kerstin; Seibert, M. Marvin; Hantke, Max F.; Benner, W. Henry; Svenda, Martin; Ekeberg, Tomas; Loh, N. Duane; Pietrini, Alberto; Zani, Alessandro; Rath, Asawari D.; Westphal, Daniel; Kirian, Richard A.; Awel, Salah; Wiedorn, Max O.; van der Schot, Gijs; Carlsson, Gunilla H.; Hasse, Dirk; Sellberg, Jonas A.; Barty, Anton; Andreasson, Jakob; Boutet, Sébastien; Williams, Garth; Koglin, Jason; Hajdu, Janos; Larsson, Daniel S. D.

    2017-01-01

    This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ∼40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ∼35 to ∼300 nm in diameter). This is likely owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles is critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10^12 photons per µm² per pulse. The full-width of the focus at half-maximum was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. The results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers. PMID:28512572

  1. Study of sample drilling techniques for Mars sample return missions

    NASA Technical Reports Server (NTRS)

    Mitchell, D. C.; Harris, P. T.

    1980-01-01

    To demonstrate the feasibility of acquiring various surface samples for a Mars sample return mission the following tasks were performed: (1) design of a Mars rover-mounted drill system capable of acquiring crystalline rock cores; prediction of performance, mass, and power requirements for various size systems, and the generation of engineering drawings; (2) performance of simulated permafrost coring tests using a residual Apollo lunar surface drill; (3) design of a rock breaker system which can be used to produce small samples of rock chips from rocks which are too large to return to Earth, but too small to be cored with the Rover-mounted drill; (4) design of sample containers for the selected regolith cores, rock cores, and small particulate or rock samples; and (5) design of sample handling and transfer techniques which will be required through all phases of sample acquisition, processing, and stowage on-board the Earth return vehicle. A preliminary design of a light-weight Rover-mounted sampling scoop was also developed.

  2. Sampling for Global Epidemic Models and the Topology of an International Airport Network

    PubMed Central

    Bobashev, Georgiy; Morris, Robert J.; Goedecke, D. Michael

    2008-01-01

    Mathematical models that describe the global spread of infectious diseases such as influenza, severe acute respiratory syndrome (SARS), and tuberculosis (TB) often consider a sample of international airports as a network supporting disease spread. However, there is no consensus on how many cities should be selected or on how to select those cities. Using airport flight data that commercial airlines reported to the Official Airline Guide (OAG) in 2000, we have examined the network characteristics of network samples obtained under different selection rules. In addition, we have examined different size samples based on largest flight volume and largest metropolitan populations. We have shown that although the bias in network characteristics increases with the reduction of the sample size, a relatively small number of areas that includes the largest airports, the largest cities, the most-connected cities, and the most central cities is enough to describe the dynamics of the global spread of influenza. The analysis suggests that a relatively small number of cities (around 200 or 300 out of almost 3000) can capture enough network information to adequately describe the global spread of a disease such as influenza. Weak traffic flows between small airports can contribute to noise and mask other means of spread such as the ground transportation. PMID:18776932
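
    The sampling question can be mimicked with a toy experiment: keep only the top-k best-connected nodes and watch how the network statistics drift. A synthetic scale-free graph stands in for the OAG flight network, which is not reproduced here.

      import networkx as nx

      G = nx.barabasi_albert_graph(3000, m=3, seed=1)  # proxy for ~3000 airports

      for k in (3000, 1000, 300, 100):
          top = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:k]
          H = G.subgraph(n for n, _ in top)
          print(f"top {k:>4}: nodes={H.number_of_nodes()}, "
                f"edges={H.number_of_edges()}, "
                f"clustering={nx.average_clustering(H):.3f}")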

  3. Size effect on atomic structure in low-dimensional Cu-Zr amorphous systems.

    PubMed

    Zhang, W B; Liu, J; Lu, S H; Zhang, H; Wang, H; Wang, X D; Cao, Q P; Zhang, D X; Jiang, J Z

    2017-08-04

    The size effect on atomic structure of a Cu64Zr36 amorphous system, including zero-dimensional small-size amorphous particles (SSAPs) and two-dimensional small-size amorphous films (SSAFs) together with a bulk sample, was investigated by molecular dynamics simulations. We revealed that sample size strongly affects local atomic structure in both Cu64Zr36 SSAPs and SSAFs, which are composed of core and shell (surface) components. Compared with the core component, the shell component of SSAPs has a lower average coordination number and average bond length, a higher degree of ordering, and a lower packing density due to the segregation of Cu atoms on the shell of Cu64Zr36 SSAPs. These atomic structure differences in SSAPs of various sizes result in different glass transition temperatures: the glass transition temperature for the shell component is found to be 577 K, much lower than the 910 K for the core component. We further extended the size effect on structure and glass transition temperature to Cu64Zr36 SSAFs, and revealed that Tg decreases as the SSAFs become thinner due to the following factors: different dynamic motion (mean square displacement), different densities of core and surface, and Cu segregation on the surface of SSAFs. The results obtained here differ from those for the size effect on atomic structure of nanometer-sized crystalline metallic alloys.

  4. Composition of hydroponic lettuce: effect of time of day, plant size, and season.

    PubMed

    Gent, Martin P N

    2012-02-01

    The diurnal variation of nitrate and sugars in leafy green vegetables may vary with plant size or the ability of plants to buffer the uptake, synthesis, and use of metabolites. Bibb lettuce was grown in hydroponics in a greenhouse and sampled at 3 h intervals throughout one day in August 2007 and another day in November 2008 to determine fresh weight, dry matter, and concentration of nitrate and sugars. Plantings differing in size and age were sampled on each date. The dry/fresh weight ratio increased during the daylight period. This increase was greater for small compared to large plants. On a fresh weight basis, tissue nitrate of small plants was only half that of larger plants. The variation in concentration with time was much less for nitrate than for soluble sugars. Soluble sugars were similar for all plant sizes early in the day, but they increased far more for small compared to large plants in the long days of summer. The greatest yield on a fresh weight basis was obtained by harvesting lettuce at dawn. Although dry matter or sugar content increased later in the day, there is no commercial benefit to delaying harvest as consumers do not buy lettuce for these attributes. Copyright © 2011 Society of Chemical Industry.

  5. A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification.

    PubMed

    Jiang, Wenyu; Simon, Richard

    2007-12-20

    This paper first provides a critical review of some existing methods for estimating the prediction error in classifying microarray data, where the number of genes greatly exceeds the number of specimens. Special attention is given to the bootstrap-related methods. When the sample size n is small, we find that all the reviewed methods suffer from either substantial bias or variability. We introduce a repeated leave-one-out bootstrap (RLOOB) method that predicts for each specimen in the sample using bootstrap learning sets of size ℓn, a chosen multiple of the sample size n. We then propose an adjusted bootstrap (ABS) method that fits a learning curve to the RLOOB estimates calculated with different bootstrap learning set sizes. The ABS method is robust across the situations we investigate and provides a slightly conservative estimate of the prediction error. Even with small samples, it does not suffer from the large upward bias of the leave-one-out bootstrap and the 0.632+ bootstrap, nor from the large variability of leave-one-out cross-validation in microarray applications. Copyright (c) 2007 John Wiley & Sons, Ltd.
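
    A minimal Python sketch of the repeated leave-one-out bootstrap idea: for each specimen, average the error of classifiers trained on bootstrap learning sets drawn from the remaining specimens, then evaluate at several learning-set sizes (the ABS method fits a curve to such estimates). The classifier, set sizes, and rloob_error helper are illustrative assumptions, not the authors' exact scheme.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def rloob_error(X, y, learning_size, B=50, seed=0):
            rng = np.random.default_rng(seed)
            n = len(y)
            errors = np.zeros(n)
            for i in range(n):
                rest = np.delete(np.arange(n), i)   # leave specimen i out
                wrong = 0
                for _ in range(B):
                    idx = rng.choice(rest, size=learning_size, replace=True)
                    while len(np.unique(y[idx])) < 2:   # guard: need both classes to fit
                        idx = rng.choice(rest, size=learning_size, replace=True)
                    clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
                    wrong += clf.predict(X[i:i + 1])[0] != y[i]
                errors[i] = wrong / B
            return errors.mean()

        rng = np.random.default_rng(1)
        X = rng.normal(size=(30, 5))
        y = (X[:, 0] + rng.normal(size=30) > 0).astype(int)
        for m in (10, 20, 30):                       # vary the learning-set size
            print(m, round(rloob_error(X, y, m), 3))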

  6. Externally pressurized porous cylinder for multiple surface aerosol generation and method of generation

    DOEpatents

    Apel, C.T.; Layman, L.R.; Gallimore, D.L.

    1988-05-10

    A nebulizer is described for generating aerosol having small droplet sizes and high efficiency at low sample introduction rates. The nebulizer has a cylindrical gas-permeable active surface. A sleeve is disposed around the cylinder, and gas is provided from the sleeve to the interior of the cylinder formed by the active surface. In operation, a liquid is provided to the inside of the gas-permeable surface. The gas contacts the wetted surface and forms small bubbles which burst to form an aerosol. Those bubbles which are large are carried by momentum to another part of the cylinder, where they are renebulized. This process continues until the entire sample is nebulized into aerosol-sized droplets. 2 figs.

  7. Goodness-of-fit tests for discrete data: a review and an application to a health impairment scale.

    PubMed

    Horn, S D

    1977-03-01

    We review the advantages and disadvantages of several goodness-of-fit tests which may be used with discrete data: the multinomial test, the likelihood ratio test, the χ² test, the two-stage χ² test and the discrete Kolmogorov-Smirnov test. Although the χ² test is the best known and most widely used of these tests, its use with small sample sizes is controversial. If one has data which fall into ordered categories, then the discrete Kolmogorov-Smirnov test is an exact test which uses the information from the ordering and can be used for small sample sizes. We illustrate these points with an example of several analyses of health impairment data.
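
    A short Python sketch contrasting the two tests mentioned above on a small ordered-category sample. The counts and hypothesized probabilities are made up; the discrete KS statistic is computed directly (its exact p-value would come from the discrete null distribution, which is not evaluated here).

        import numpy as np
        from scipy.stats import chisquare

        observed = np.array([3, 5, 2, 1, 1])        # n = 12 over 5 ordered categories
        p0 = np.array([0.2, 0.3, 0.25, 0.15, 0.1])  # hypothesized probabilities
        n = observed.sum()

        # Chi-square: several expected counts are < 5, which is exactly why
        # its use with small samples is controversial.
        chi2, p_chi2 = chisquare(observed, f_exp=n * p0)

        # Discrete KS statistic: largest gap between empirical and hypothesized
        # CDFs at the category boundaries -- the ordering of categories matters.
        D = np.max(np.abs(np.cumsum(observed) / n - np.cumsum(p0)))
        print(f"chi2={chi2:.2f} (p={p_chi2:.3f}), discrete KS D={D:.3f}")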

  8. Training of Existing Workers: Issues, Incentives and Models

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This report presents issues associated with incentives for training existing workers in small to medium-sized firms, identified through a small sample of case studies from the retail, manufacturing, and building and construction industries. While the majority of employers recognise workforce skill levels are fundamental to the success of the…

  9. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
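
    A Python sketch of the resampling exercise described above: draw repeated samples of various sizes from a large synthetic cohort and watch how sensitivity and specificity estimates spread. The cohort size, prevalence, and test properties are invented, not the study's data.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 8000
        disease = rng.random(N) < 0.3                      # assumed 30% prevalence
        test_pos = np.where(disease, rng.random(N) < 0.8,  # assumed sensitivity 0.8
                            rng.random(N) < 0.15)          # assumed 1 - specificity

        for n in (100, 200, 400, 800):
            sens, spec = [], []
            for _ in range(100):                           # 100 samples per size
                idx = rng.choice(N, size=n, replace=False)
                d, t = disease[idx], test_pos[idx]
                sens.append((t & d).sum() / d.sum())
                spec.append((~t & ~d).sum() / (~d).sum())
            # Range across the 100 samples shrinks markedly by n ~ 400-600.
            print(n, round(np.ptp(sens), 3), round(np.ptp(spec), 3))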

  10. A USANS/SANS study of the accessibility of pores in the Barnett Shale to methane and water

    USGS Publications Warehouse

    Ruppert, Leslie F.; Sakurovs, Richard; Blach, Tomasz P.; He, Lilin; Melnichenko, Yuri B.; Mildner, David F.; Alcantar-Lopez, Leo

    2013-01-01

    Shale is an increasingly important source of natural gas in the United States. The gas is held in fine pores that need to be accessed by horizontal drilling and hydrofracturing techniques. Understanding the nature of the pores may provide clues to making gas extraction more efficient. We have investigated two Mississippian Barnett Shale samples, combining small-angle neutron scattering (SANS) and ultrasmall-angle neutron scattering (USANS) to determine the pore size distribution of the shale over the size range 10 nm to 10 μm. By adding deuterated methane (CD4) and, separately, deuterated water (D2O) to the shale, we have identified the fraction of pores that are accessible to these compounds over this size range. The total pore size distribution is essentially identical for the two samples. At pore sizes >250 nm, >85% of the pores in both samples are accessible to both CD4 and D2O. However, differences in accessibility to CD4 are observed in the smaller pore sizes (~25 nm). In one sample, CD4 penetrated the smallest pores as effectively as it did the larger ones. In the other sample, less than 70% of the smallest pores were accessible to CD4, but they were still largely penetrable by water, suggesting that small-scale heterogeneities in methane accessibility occur in the shale samples even though the total porosity does not differ. An additional study investigating the dependence of scattered intensity on the pressure of CD4 allows for an accurate estimation of the pressure at which the scattered intensity is at a minimum. This study provides information about the composition of the material immediately surrounding the pores. Most of the accessible (open) pores in the 25 nm size range can be associated with either mineral matter or high reflectance organic material. However, a complementary scanning electron microscopy investigation shows that most of the pores in these shale samples are contained in the organic components. The neutron scattering results indicate that the pores are not equally proportioned in the different constituents within the shale. There is some indication from the SANS results that the composition of the pore-containing material varies with pore size; the pore size distribution associated with mineral matter is different from that associated with organic phases.

  11. Applying information theory to small groups assessment: emotions and well-being at work.

    PubMed

    García-Izquierdo, Antonio León; Moreno, Blanca; García-Izquierdo, Mariano

    2010-05-01

    This paper explores and analyzes the relations between emotions and well-being in a sample of aviation personnel, passenger crew (flight attendants). There is increasing interest in studying the influence of emotions and their role as psychosocial factors in the work environment, as they can act as facilitators or shock absorbers. Testing theoretical models with traditional parametric techniques requires a large sample size for efficient estimation of the coefficients that quantify the relations between variables. Since the available sample is small, the most common situation in European enterprises, we used the maximum entropy principle to explore the emotions involved in the psychosocial risks. The analyses show that this method takes advantage of the limited information available and guarantees optimal estimation, the results of which are coherent with theoretical models and numerous empirical studies on emotions and well-being.
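
    An illustrative Python sketch of the maximum entropy principle invoked above: among all discrete distributions matching the few moments a small sample pins down (here, just the mean), choose the one with the largest entropy. The 1-5 rating support and target mean are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize

        x = np.arange(1, 6)      # support: e.g. a 1-5 emotion rating scale
        target_mean = 3.4        # the only constraint the small sample supplies

        def neg_entropy(p):
            p = np.clip(p, 1e-12, None)   # avoid log(0)
            return np.sum(p * np.log(p))

        cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
                {"type": "eq", "fun": lambda p: p @ x - target_mean}]
        res = minimize(neg_entropy, np.full(5, 0.2),
                       bounds=[(0, 1)] * 5, constraints=cons)
        print(np.round(res.x, 3))  # exponential-family shape, as maxent theory predicts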

  12. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    PubMed

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco, spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were lifted only within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (<31 μm, depending on the grain size classification used). This relatively high amount represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface on which the dust devils evolved also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of both sampled dust devils was comparable in terms of vertical grain size distribution and relative particle load, although the two dust devils differed in their dimensions and intensities. A general trend of decreasing grain sizes with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.

  13. Simultaneous analysis of small organic acids and humic acids using high performance size exclusion chromatography.

    PubMed

    Qin, Xiaopeng; Liu, Fei; Wang, Guangcai; Weng, Liping

    2012-12-01

    An accurate and fast method for simultaneous determination of small organic acids and much larger humic acids was developed using high performance size exclusion chromatography. Two small organic acids, i.e. salicylic acid and 2,3-dihydroxybenzoic acid, and one purified humic acid material were used in this study. Under the experimental conditions, the UV peaks of salicylic acid and 2,3-dihydroxybenzoic acid were well separated from the peaks of humic acid in the chromatogram. Concentrations of the two small organic acids could be accurately determined from their peak areas. The concentration of humic acid in the mixture could then be derived from mass balance calculations. The measured results agreed well with the nominal concentrations. The detection limits are 0.05 mg/L and 0.01 mg/L for salicylic acid and 2,3-dihydroxybenzoic acid, respectively. Applicability of the method to natural samples was tested using groundwater, glacier, and river water samples (both original and spiked with salicylic acid and 2,3-dihydroxybenzoic acid) with a total organic carbon concentration ranging from 2.1 to 179.5 mg C/L. The results obtained are promising, especially for groundwater samples and river water samples with a total organic carbon concentration below 9 mg C/L. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A low-volume cavity ring-down spectrometer for sample-limited applications

    NASA Astrophysics Data System (ADS)

    Stowasser, C.; Farinas, A. D.; Ware, J.; Wistisen, D. W.; Rella, C.; Wahl, E.; Crosson, E.; Blunier, T.

    2014-08-01

    In atmospheric and environmental sciences, optical spectrometers are used for the measurements of greenhouse gas mole fractions and the isotopic composition of water vapor or greenhouse gases. The large sample cell volumes (tens of milliliters to several liters) in commercially available spectrometers constrain the usefulness of such instruments for applications that are limited in sample size and/or need to track fast variations in the sample stream. In an effort to make spectrometers more suitable for sample-limited applications, we developed a low-volume analyzer capable of measuring mole fractions of methane and carbon monoxide based on a commercial cavity ring-down spectrometer. The instrument has a small sample cell (9.6 ml) and can selectively be operated at a sample cell pressure of 140, 45, or 20 Torr (effective internal volume of 1.8, 0.57, and 0.25 ml). We present the new sample cell design and the flow path configuration, which are optimized for small sample sizes. To quantify the spectrometer's usefulness for sample-limited applications, we determine the renewal rate of sample molecules within the low-volume spectrometer. Furthermore, we show that the performance of the low-volume spectrometer matches the performance of the standard commercial analyzers by investigating linearity, precision, and instrumental drift.

  15. Interpreting survival data from clinical trials of surgery versus stereotactic body radiation therapy in operable Stage I non-small cell lung cancer patients.

    PubMed

    Samson, Pamela; Keogan, Kathleen; Crabtree, Traves; Colditz, Graham; Broderick, Stephen; Puri, Varun; Meyers, Bryan

    2017-01-01

    To identify the variability of short- and long-term survival outcomes among closed Phase III randomized controlled trials with small sample sizes comparing SBRT (stereotactic body radiation therapy) and surgical resection in operable clinical Stage I non-small cell lung cancer (NSCLC) patients. Clinical Stage I NSCLC patients who underwent surgery at our institution meeting the inclusion/exclusion criteria for STARS (Randomized Study to Compare CyberKnife to Surgical Resection in Stage I Non-small Cell Lung Cancer), ROSEL (Trial of Either Surgery or Stereotactic Radiotherapy for Early Stage (IA) Lung Cancer), or both were identified. Bootstrapping analysis provided 10,000 iterations to depict 30-day mortality and three-year overall survival (OS) in cohorts of 16 patients (to simulate the STARS surgical arm), 27 patients (to simulate the pooled surgical arms of STARS and ROSEL), and 515 (to simulate the goal accrual for the surgical arm of STARS). From 2000 to 2012, 749/873 (86%) of clinical Stage I NSCLC patients who underwent resection were eligible for STARS only, ROSEL only, or both studies. When patients eligible for STARS only were repeatedly sampled with a cohort size of 16, the 3-year OS rates ranged from 27 to 100%, and 30-day mortality varied from 0 to 25%. When patients eligible for ROSEL or for both STARS and ROSEL underwent bootstrapping with n=27, the 3-year OS ranged from 46 to 100%, while 30-day mortality varied from 0 to 15%. Finally, when patients eligible for STARS were repeatedly sampled in groups of 515, 3-year OS narrowed to 70-85%, with 30-day mortality varying from 0 to 4%. Short- and long-term survival outcomes from trials with small sample sizes are extremely variable and unreliable for extrapolation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
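
    A compact Python sketch of the bootstrapping exercise described above: resample small surgical cohorts from a large eligible pool and record the spread of 3-year overall survival. The pool size matches the abstract, but the assumed true survival rate is invented, so the printed ranges are illustrative only.

        import numpy as np

        rng = np.random.default_rng(42)
        pool = rng.random(749) < 0.78        # ~78% 3-year OS assumed for illustration

        for cohort in (16, 27, 515):         # STARS arm, pooled arms, goal accrual
            os3 = [pool[rng.integers(0, len(pool), cohort)].mean()
                   for _ in range(10_000)]   # 10,000 bootstrap iterations
            print(cohort, round(min(os3), 2), round(max(os3), 2))
        # Small cohorts produce extremely wide ranges; n = 515 narrows sharply.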

  16. Adaptive significance of small body size: strength and motor performance of school children in Mexico and Papua New Guinea.

    PubMed

    Malina, R M; Little, B B; Shoup, R F; Buschang, P H

    1987-08-01

    The postulated superior functional efficiency in association with reduced body size under conditions of chronic protein-energy undernutrition was considered in school children from rural Mexico and coastal Papua New Guinea. Grip strength and three measures of motor performance were measured in cross-sectional samples of children 6-16 years of age from a rural agricultural community in Oaxaca, Mexico, and from the coastal community Pere on Manus Island, Papua New Guinea. The strength and performance of a mixed-longitudinal sample of well nourished children from Philadelphia was used as a reference. The Oaxaca and Pere children are significantly shorter and lighter and are not as strong as the well nourished children. Motor performances of Pere children compare favorably to those of the better-nourished Philadelphia children, whereas those of the Oaxaca children are poorer. Throwing performance is more variable. When expressed relative to body size, strength is similar in the three samples, but the running and jumping performances of Pere children per unit body size are better than the relative performances of Oaxaca and Philadelphia children. Throwing performance per unit body size is better in the undernourished children. The influence of age, stature, and weight on the performance of Oaxaca and Pere children is generally similar to that for well nourished children. These results suggest that the hypothesized adaptive significance of small body size for the functional efficiency of populations living under conditions of chronic undernutrition varies between populations and with performance tasks.

  17. Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.

    PubMed

    Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra

    2016-11-20

    The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
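
    A back-of-envelope Python sketch of the inflation described above, using the textbook design effect 1 + (m - 1) × ICC for a simple cross-sectional cluster trial. This is the elementary special case, not the paper's full multilevel formula with cluster and individual autocorrelations; the numbers are illustrative.

        import math

        def cluster_n(n_individual: int, m: int, icc: float) -> int:
            """Total n when randomising clusters of size m with intracluster
            correlation icc, inflating an individually randomised sample size."""
            deff = 1 + (m - 1) * icc      # the usual design effect
            return math.ceil(n_individual * deff)

        # 300 participants individually randomised vs clusters of 20, ICC = 0.05:
        print(cluster_n(n_individual=300, m=20, icc=0.05))   # 585, nearly double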

  18. Porosity of the Marcellus Shale: A contrast matching small-angle neutron scattering study

    USGS Publications Warehouse

    Bahadur, Jitendra; Ruppert, Leslie F.; Pipich, Vitaliy; Sakurovs, Richard; Melnichenko, Yuri B.

    2018-01-01

    Neutron scattering techniques were used to determine the effect of mineral matter on the accessibility of water and toluene to pores in the Devonian Marcellus Shale. Three Marcellus Shale samples, representing quartz-rich, clay-rich, and carbonate-rich facies, were examined using contrast matching small-angle neutron scattering (CM-SANS) at ambient pressure and temperature. Contrast-matching compositions of H2O/D2O and, separately, of toluene/deuterated toluene were used to probe open and closed pores of these three shale samples. Results show that although the mean pore radius was approximately the same for all three samples, the fractal dimension of the quartz-rich sample was higher than for the clay-rich and carbonate-rich samples, indicating different pore size distributions among the samples. The number density of pores was highest in the clay-rich sample and lowest in the quartz-rich sample. Contrast matching with water and toluene mixtures shows that the accessibility of pores to water and toluene also varied among the samples. In general, water accessed approximately 70–80% of the larger pores (>80 nm radius) in all three samples. At smaller pore sizes (~5–80 nm radius), the fraction of accessible pores decreases. The lowest accessibility to both fluids is at a pore throat size of ~25 nm radius, with the quartz-rich sample exhibiting lower accessibility than the clay- and carbonate-rich samples. The mechanism for this behaviour is unclear, but because the mineralogy of the three samples varies, it is likely that the inaccessible pores in this size range are associated with organics and not a specific mineral within the samples. At even smaller pore sizes (~<2.5 nm radius), in all samples, the fraction of pores accessible to water increases again to approximately 70–80%. Accessibility to toluene generally follows that of water; however, in the smallest pores (~<2.5 nm radius), accessibility to toluene decreases, especially in the clay-rich sample, which contains about 30% more closed pores than the quartz- and carbonate-rich samples. Results from this study show that the mineralogy of producing intervals within a shale reservoir can affect the accessibility of pores to water and toluene, and these mineralogic differences may affect hydrocarbon storage and production and hydraulic fracturing characteristics.

  19. Alternative Models for Small Samples in Psychological Research: Applying Linear Mixed Effects Models and Generalized Estimating Equations to Repeated Measures Data

    ERIC Educational Resources Information Center

    Muth, Chelsea; Bales, Karen L.; Hinde, Katie; Maninger, Nicole; Mendoza, Sally P.; Ferrer, Emilio

    2016-01-01

    Unavoidable sample size issues beset psychological research that involves scarce populations or costly laboratory procedures. When incorporating longitudinal designs these samples are further reduced by traditional modeling techniques, which perform listwise deletion for any instance of missing data. Moreover, these techniques are limited in their…

  20. Application of binomial and multinomial probability statistics to the sampling design process of a global grain tracing and recall system

    USDA-ARS?s Scientific Manuscript database

    Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...
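
    A minimal Python sketch of the binomial logic behind such a tracer-sampling design: if a fraction p of kernels in a lot are tracers, how large a sample guarantees detecting at least one tracer with the desired confidence? The tracer rate and confidence level are illustrative assumptions.

        import math

        def sample_size_for_detection(p: float, confidence: float) -> int:
            """Smallest n with P(at least one tracer) >= confidence, assuming
            independent draws: 1 - (1 - p)**n >= confidence."""
            return math.ceil(math.log(1 - confidence) / math.log(1 - p))

        # One tracer per 1000 kernels, 95% detection confidence:
        print(sample_size_for_detection(p=0.001, confidence=0.95))  # about 2995 kernels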

  1. Decisions from Experience: Why Small Samples?

    ERIC Educational Resources Information Center

    Hertwig, Ralph; Pleskac, Timothy J.

    2010-01-01

    In many decisions we cannot consult explicit statistics telling us about the risks involved in our actions. In lieu of such data, we can arrive at an understanding of our dicey options by sampling from them. The size of the samples that we take determines, ceteris paribus, how good our choices will be. Studies of decisions from experience have…

  2. Experimental light scattering by ultrasonically controlled small particles - Implications for Planetary Science

    NASA Astrophysics Data System (ADS)

    Gritsevich, M.; Penttilä, A.; Maconi, G.; Kassamakov, I.; Markkanen, J.; Martikainen, J.; Väisänen, T.; Helander, P.; Puranen, T.; Salmi, A.; Hæggström, E.; Muinonen, K.

    2017-09-01

    We present the results obtained with our newly developed 3D scatterometer - a setup for precise multi-angular measurements of light scattered by mm- to µm-sized samples held in place by sound. These measurements are cross-validated against the modeled light-scattering characteristics of the sample, i.e., the intensity and the degree of linear polarization of the reflected light, calculated with state-of-the-art electromagnetic techniques. We demonstrate a unique non-destructive approach to derive the optical properties of small grain samples which facilitates research on highly valuable planetary materials, such as samples returned from space missions or rare meteorites.

  3. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
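
    A Python sketch of the kind of calculation the modelling study interrogates: the standard normal-approximation sample size for comparing two proportions, showing how small changes in the assumed control-arm PID risk or relative risk move the required n substantially. The formula is the usual textbook one, not the paper's compartmental model, and the event rates are illustrative.

        from scipy.stats import norm

        def n_per_group(p_control, rr, alpha=0.05, power=0.80):
            """Per-group n to detect a relative risk rr against control risk
            p_control, two-sided alpha, given power (normal approximation)."""
            p_treat = p_control * rr
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            var = p_control * (1 - p_control) + p_treat * (1 - p_treat)
            return int(round(z**2 * var / (p_control - p_treat) ** 2))

        for p0, rr in [(0.03, 0.5), (0.04, 0.5), (0.03, 0.6)]:
            print(p0, rr, n_per_group(p0, rr))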

  4. Methodological approach for substantiating disease freedom in a heterogeneous small population. Application to ovine scrapie, a disease with a strong genetic susceptibility.

    PubMed

    Martinez, Marie-José; Durand, Benoit; Calavas, Didier; Ducrot, Christian

    2010-06-01

    Demonstrating disease freedom is becoming important in different fields including animal disease control. Most methods consider sampling only from a homogeneous population in which each animal has the same probability of becoming infected. In this paper, we propose a new methodology to calculate the probability of detecting the disease if it is present in a heterogeneous population of small size with potentially different risk groups, differences in risk being defined using relative risks. To calculate this probability, for each possible arrangement of the infected animals in the different groups, the probability that all the animals tested are test-negative given this arrangement is multiplied by the probability that this arrangement occurs. The probability formula is developed using the assumption of a perfect test and hypergeometric sampling for finite small size populations. The methodology is applied to scrapie, a disease affecting small ruminants and characterized in sheep by a strong genetic susceptibility defining different risk groups. It illustrates that the genotypes of the tested animals influence heavily the confidence level of detecting scrapie. The results present the statistical power for substantiating disease freedom in a small heterogeneous population as a function of the design prevalence, the structure of the sample tested, the structure of the herd and the associated relative risks. (c) 2010 Elsevier B.V. All rights reserved.
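
    A Python sketch of the homogeneous special case underlying the method: the probability that a sample of size s from a herd of size N contains at least one of d infected animals, under a perfect test and hypergeometric sampling. The paper's heterogeneous version additionally weights arrangements of infected animals across risk groups; the herd size and design prevalence below are illustrative.

        from scipy.stats import hypergeom

        def detection_probability(N: int, d: int, s: int) -> float:
            """P(at least one infected animal in the sample) = 1 - P(none)."""
            return 1 - hypergeom.pmf(0, N, d, s)

        # Herd of 100 at 5% design prevalence (d = 5): payoff of testing effort.
        for s in (10, 20, 40, 60):
            print(s, round(detection_probability(100, 5, s), 3))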

  5. Cloud condensation nuclei activity and droplet activation kinetics of wet processed regional dust samples and minerals

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Sokolik, I. N.; Nenes, A.

    2011-08-01

    This study reports laboratory measurements of particle size distributions, cloud condensation nuclei (CCN) activity, and droplet activation kinetics of wet generated aerosols from clays, calcite, quartz, and desert soil samples from Northern Africa, East Asia/China, and Northern America. The dependence of critical supersaturation, sc, on particle dry diameter, Ddry, is used to characterize particle-water interactions and assess the ability of Frenkel-Halsey-Hill adsorption activation theory (FHH-AT) and Köhler theory (KT) to describe the CCN activity of the considered samples. Wet generated regional dust samples produce unimodal size distributions with particle sizes as small as 40 nm, CCN activation consistent with KT, and exhibit hygroscopicity similar to inorganic salts. Wet generated clays and minerals produce a bimodal size distribution; the CCN activity of the smaller mode is consistent with KT, while the larger mode is less hydrophilic, follows activation by FHH-AT, and displays almost identical CCN activity to dry generated dust. Ion Chromatography (IC) analysis performed on regional dust samples indicates a soluble fraction that cannot explain the CCN activity of dry or wet generated dust. A mass balance and hygroscopicity closure suggests that the small amount of ions (from low solubility compounds like calcite) present in the dry dust dissolve in the aqueous suspension during the wet generation process and give rise to the observed small hygroscopic mode. Overall these results identify an artifact that may question the atmospheric relevance of dust CCN activity studies using the wet generation method. Based on the method of threshold droplet growth analysis, wet generated mineral aerosols display similar activation kinetics compared to ammonium sulfate calibration aerosol. Finally, a unified CCN activity framework that accounts for concurrent effects of solute and adsorption is developed to describe the CCN activity of aged or hygroscopic dusts.

  6. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    NASA Astrophysics Data System (ADS)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of binary probit regression models are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation when the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to identify the chance of separation occurring in binary probit regression models under the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined by simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreases and is nearly identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE's, especially for smaller sample sizes, while for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
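
    A tiny Python sketch of what "separation" means in practice: a single covariate whose values perfectly split the binary response, which makes the probit (or logit) MLE diverge. The data and the completely_separated helper are invented; Firth's penalised likelihood, not shown here, keeps the estimates finite in this situation.

        import numpy as np

        def completely_separated(x: np.ndarray, y: np.ndarray) -> bool:
            """True if some threshold on x perfectly predicts y (complete separation)."""
            return (x[y == 1].min() > x[y == 0].max()
                    or x[y == 1].max() < x[y == 0].min())

        x = np.array([0.2, 0.5, 0.9, 1.4, 2.1, 2.8])
        y = np.array([0,   0,   0,   1,   1,   1  ])  # small n: separation is likely
        print(completely_separated(x, y))             # True -> MLE will not converge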

  7. Effect of charcoal doping on the superconducting properties of MgB 2 bulk

    NASA Astrophysics Data System (ADS)

    Kim, N. K.; Tan, K. S.; Jun, B.-H.; Park, H. W.; Joo, J.; Kim, C.-J.

    2008-09-01

    The effect of charcoal doping on the superconducting properties of in situ processed MgB 2 bulk samples was investigated. To understand the size effect of the dopant the charcoal powder was attrition milled for 1 h, 3 h and 6 h using ZrO 2 balls. The milled charcoal powders were mixed with magnesium and boron powders to a nominal composition of Mg(B 0.975C 0.025) 2. The Mg(B 0.975C 0.025) 2 compacts were heat-treated at 900 °C for 0.5 h in flowing Ar atmosphere. Magnetic susceptibility for the samples showed that the superconducting transition temperature ( Tc) decreased as the size of the charcoal powder decreased. The critical current density ( Jc) of Mg(B 0.975C 0.025) 2 prepared using large size charcoal powder was lower than that of the undoped MgB 2. However, a crossover of Jc value was observed at high magnetic fields of about 4 T in Mg(B 0.975C 0.025) 2 prepared using small size charcoal powder. Carbon diffusion into the boron site was easier and gave the Jc increase effect when the small size charcoal was used as a dopant.

  8. Sizing for the apparel industry using statistical analysis - a Brazilian case study

    NASA Astrophysics Data System (ADS)

    Capelassi, C. H.; Carvalho, M. A.; El Kattel, C.; Xu, B.

    2017-10-01

    The study of the body measurements of Brazilian women used the Kinect Body Imaging system for 3D body scanning. The study aims to meet the apparel industry's need for accurate measurements. Data were statistically treated using the IBM SPSS 23 system, with 95% confidence (P < 0.05) for the inferential analysis, with the purpose of grouping the measurements into sizes so that a smaller number of sizes can cover a greater number of people. The sample consisted of 101 volunteers aged between 19 and 62 years. A cluster analysis was performed to identify the main body shapes of the sample. The results were divided between the top and bottom body portions: for the top portion, the measurements of the abdomen, waist, and bust circumferences, as well as the height, were used; for the bottom portion, the measurements of the hip circumference and the height were used. Three sizing systems were developed for the researched sample from the Abdomen-to-Height Ratio - AHR (top portion): Small (AHR < 0.52), Medium (AHR: 0.52-0.58), Large (AHR > 0.58), and from the Hip-to-Height Ratio - HHR (bottom portion): Small (HHR < 0.62), Medium (HHR: 0.62-0.68), Large (HHR > 0.68).
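
    A small Python sketch applying the published cut-offs: classify a scanned subject's top and bottom sizes from the abdomen-to-height and hip-to-height ratios. Only the thresholds come from the study; the subject's measurements are invented.

        def top_size(ahr: float) -> str:
            return "Small" if ahr < 0.52 else "Medium" if ahr <= 0.58 else "Large"

        def bottom_size(hhr: float) -> str:
            return "Small" if hhr < 0.62 else "Medium" if hhr <= 0.68 else "Large"

        abdomen, hip, height = 88.0, 104.0, 162.0   # cm, illustrative subject
        print(top_size(abdomen / height), bottom_size(hip / height))
        # -> Medium (AHR ~ 0.54), Medium (HHR ~ 0.64)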

  9. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  10. Magnetic and critical properties of Pr0.6Sr0.4MnO3 nanocrystals prepared by a combination of the solid state reaction and the mechanical ball milling methods

    NASA Astrophysics Data System (ADS)

    Dung, Nguyen Thi; Linh, Dinh Chi; Huyen Yen, Pham Duc; Yu, Seong Cho; Van Dang, Nguyen; Dang Thanh, Tran

    2018-06-01

    The influence of the crystallite size on the magnetic and critical properties of Pr0.6Sr0.4MnO3 nanocrystals has been investigated. The results show that the Curie temperature and magnetization slightly decrease with decreasing average crystallite size. Based on the mean-field theory and the magnetic-field dependences of magnetization at different temperatures, we pointed out that the ferromagnetic-paramagnetic phase transition in the samples is a second-order phase transition with critical exponents (β, γ, and δ) close to those of the mean-field theory. However, the values of β, γ, and δ obtained for the samples deviate slightly from those expected for the mean-field theory. This means that short-range ferromagnetic interactions appear in the smaller particles. In other words, the nanocrystals become more magnetically inhomogeneous at smaller crystallite sizes, which could be explained by the presence of surface-related effects, lattice strain, and distortions that weaken the ferromagnetic interaction at small crystallite sizes.

  11. Device and technique for in-process sampling and analysis of molten metals and other liquids presenting harsh sampling conditions

    DOEpatents

    Alvarez, J.L.; Watson, L.D.

    1988-01-21

    An apparatus and method for continuously analyzing liquids by creating a supersonic spray which is shaped and sized prior to delivery of the spray to an analysis apparatus. The gas and liquid are sheared into small particles which are of a size and uniformity to form a spray that can be controlled through adjustment of pressures and gas velocity. The spray is shaped by a concentric supplemental flow of gas. 5 figs.

  12. Pore-scale simulations of drainage in granular materials: Finite size effects and the representative elementary volume

    NASA Astrophysics Data System (ADS)

    Yuan, Chao; Chareyre, Bruno; Darve, Félix

    2016-09-01

    A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations are reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes give evolutions of water content which are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the microstructure requires frequent updates of the pore network.

  13. The IGF1 small dog haplotype is derived from Middle Eastern grey wolves.

    PubMed

    Gray, Melissa M; Sutter, Nathan B; Ostrander, Elaine A; Wayne, Robert K

    2010-02-24

    A selective sweep containing the insulin-like growth factor 1 (IGF1) gene is associated with size variation in domestic dogs. Intron 2 of IGF1 contains a SINE element and single nucleotide polymorphism (SNP) found in all small dog breeds that is almost entirely absent from large breeds. In this study, we surveyed a large sample of grey wolf populations to better understand the ancestral pattern of variation at IGF1, with a particular focus on the distribution of the small dog haplotype and its relationship to the origin of the dog. We present DNA sequence data that confirms the absence of the derived small SNP allele in the intron 2 region of IGF1 in a large sample of grey wolves and further establishes the absence of a small dog associated SINE element in all wild canids and most large dog breeds. Grey wolf haplotypes from the Middle East have higher nucleotide diversity, suggesting an origin there. Additionally, PCA and phylogenetic analyses suggest a closer kinship of the small domestic dog IGF1 haplotype with those from Middle Eastern grey wolves. The absence of both the SINE element and SNP allele in grey wolves suggests that the mutation for small body size post-dates the domestication of dogs. However, because all small dogs possess these diagnostic mutations, the mutations likely arose early in the history of domestic dogs. Our results show that the small dog haplotype is closely related to those in Middle Eastern wolves and is consistent with an ancient origin of the small dog haplotype there. Thus, in concordance with past archeological studies, our molecular analysis is consistent with the early evolution of small size in dogs from the Middle East. See associated opinion by Driscoll and Macdonald: http://jbiol.com/content/9/2/10.

  14. Assessment of interbreeding and introgression of farm genes into a small Scottish Atlantic salmon Salmo salar stock: ad hoc samples - ad hoc results?

    PubMed

    Verspoor, E; Knox, D; Marshall, S

    2016-12-01

    An eclectic set of tissues and existing data, including purposely collected samples, spanning 1997-2006, was used in an ad hoc assessment of hybridization and introgression of farmed and wild Atlantic salmon Salmo salar in the small Loch na Thull (LnT) catchment in north-west Scotland. The catchment is in an area of marine farm production and contains freshwater smolt rearing cages. The LnT S. salar stock was found to be genetically distinctive from stocks in neighbouring rivers and, despite regular reports of feral farm S. salar, there was no evidence of physical or genetic mixing. This cannot be completely ruled out, however, and low level mixing with other local wild stocks has been suggested. The LnT population appeared underpinned by a relatively small effective number of breeders (Neb) and showed relatively low levels of genetic diversity, consistent with a small effective population size. Small sample sizes, an incomplete farm baseline and the use of non-diagnostic molecular markers constrain the power of the analysis, but the findings strongly support the LnT catchment having a genetically distinct wild S. salar population little affected by interbreeding with feral farm escapes. © 2016 The Fisheries Society of the British Isles.

  15. Applications of Small Area Estimation to Generalization with Subclassification by Propensity Scores

    ERIC Educational Resources Information Center

    Chan, Wendy

    2018-01-01

    Policymakers have grown increasingly interested in how experimental results may generalize to a larger population. However, recently developed propensity score-based methods are limited by small sample sizes, where the experimental study is generalized to a population that is at least 20 times larger. This is particularly problematic for methods…

  16. Anomalous small-angle scattering as a way to solve the Babinet principle problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boiko, M. E., E-mail: m.e.boiko@mail.ioffe.ru; Sharkov, M. D.; Boiko, A. M.

    2013-12-15

    X-ray absorption spectra (XAS) have been used to determine the absorption edges of atoms present in a sample under study. A series of small-angle X-ray scattering (SAXS) measurements using different monochromatic X-ray beams at different wavelengths near the absorption edges is performed to solve the Babinet principle problem. The sizes of clusters containing the atoms identified by XAS were determined in the SAXS experiments. In contrast to differential X-ray porosimetry, anomalous SAXS makes it possible to determine the sizes of clusters of different atomic compositions.

  17. Anomalous small-angle scattering as a way to solve the Babinet principle problem

    NASA Astrophysics Data System (ADS)

    Boiko, M. E.; Sharkov, M. D.; Boiko, A. M.; Bobyl, A. V.

    2013-12-01

    X-ray absorption spectra (XAS) have been used to determine the absorption edges of atoms present in a sample under study. A series of small-angle X-ray scattering (SAXS) measurements using different monochromatic X-ray beams at different wavelengths near the absorption edges is performed to solve the Babinet principle problem. The sizes of clusters containing the atoms identified by XAS were determined in the SAXS experiments. In contrast to differential X-ray porosimetry, anomalous SAXS makes it possible to determine the sizes of clusters of different atomic compositions.

  18. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    PubMed

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Incorporating Biological Knowledge into Evaluation of Causal Regulatory Hypotheses

    NASA Technical Reports Server (NTRS)

    Chrisman, Lonnie; Langley, Pat; Bay, Stephen; Pohorille, Andrew; DeVincenzi, D. (Technical Monitor)

    2002-01-01

    Biological data can be scarce and costly to obtain. The small number of samples available typically limits statistical power and makes reliable inference of causal relations extremely difficult. However, we argue that statistical power can be increased substantially by incorporating prior knowledge and data from diverse sources. We present a Bayesian framework that combines information from different sources and we show empirically that this lets one make correct causal inferences with small sample sizes that otherwise would be impossible.
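
    A minimal Python sketch of the pooling idea, using a conjugate Beta-Binomial model as a stand-in for the paper's Bayesian framework: a prior summarising related data sources sharpens inference about a regulatory link when the new experiment has only a handful of samples. All numbers are purely illustrative.

        from scipy.stats import beta

        prior_a, prior_b = 8, 2   # prior: ~80% belief, encoding related data sources
        k, n = 3, 4               # new experiment: 3 of 4 samples support the link

        posterior = beta(prior_a + k, prior_b + n - k)
        print(round(posterior.mean(), 3), posterior.interval(0.95))
        # With the 4 samples alone (uniform prior), the 95% interval is far wider.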

  20. Spatial distribution and sequential sampling plans for Tuta absoluta (Lepidoptera: Gelechiidae) in greenhouse tomato crops.

    PubMed

    Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino

    2015-09-01

    The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of <0.10. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.

  1. Sample Size and Correlational Inference

    ERIC Educational Resources Information Center

    Anderson, Richard B.; Doherty, Michael E.; Friedrich, Jeff C.

    2008-01-01

    In 4 studies, the authors examined the hypothesis that the structure of the informational environment makes small samples more informative than large ones for drawing inferences about population correlations. The specific purpose of the studies was to test predictions arising from the signal detection simulations of R. B. Anderson, M. E. Doherty,…

  2. Utility of Inferential Norming with Smaller Sample Sizes

    ERIC Educational Resources Information Center

    Zhu, Jianjun; Chen, Hsin-Yi

    2011-01-01

    We examined the utility of inferential norming using small samples drawn from the larger "Wechsler Intelligence Scales for Children-Fourth Edition" (WISC-IV) standardization data set. The quality of the norms was estimated with multiple indexes such as polynomial curve fit, percentage of cases receiving the same score, average absolute…

  3. Mass spectra features of biomass burning boiler and coal burning boiler emitted particles by single particle aerosol mass spectrometer.

    PubMed

    Xu, Jiao; Li, Mei; Shi, Guoliang; Wang, Haiting; Ma, Xian; Wu, Jianhui; Shi, Xurong; Feng, Yinchang

    2017-11-15

    In this study, the single particle mass spectra signatures of both coal burning boiler and biomass burning boiler emitted particles were studied. Particle samples were suspended in a clean resuspension chamber and analyzed by ELPI and SPAMS simultaneously. The size distributions of BBB (biomass burning boiler sample) and CBB (coal burning boiler sample) are different: BBB peaks at a smaller size, and CBB peaks at a larger size. Mass spectra signatures of the two samples were studied by analyzing the average mass spectrum of each particle cluster extracted by ART-2a in different size ranges. In conclusion, the BBB sample mostly consists of OC and EC containing particles, and a small fraction of K-rich particles in the size range of 0.2-0.5 μm. In 0.5-1.0 μm, the BBB sample consists of EC, OC, K-rich and Al_Silicate containing particles; the CBB sample consists of EC and ECOC containing particles, while Al_Silicate (including Al_Ca_Ti_Silicate, Al_Ti_Silicate, Al_Silicate) containing particles account for higher fractions as size increases. The similarity of the single particle mass spectrum signatures of the two samples was studied by analyzing the dot product; results indicated that some of the single particle mass spectra of the two samples in the same size range are similar, which poses a challenge for future source apportionment using single particle aerosol mass spectrometers. Results of this study provide physicochemical information on important sources which contribute to particle pollution, and will support source apportionment activities. Copyright © 2017. Published by Elsevier B.V.

  4. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared.

    PubMed

    Nixon, Richard M; Wonderling, David; Grieve, Richard D

    2010-03-01

    Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
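
    A short Python sketch of the comparison: the standard error of mean cost from the central limit theorem (s / sqrt(n)) versus the non-parametric bootstrap, on a small, highly skewed cost sample. The data are simulated lognormal costs, not trial data.

        import numpy as np

        rng = np.random.default_rng(7)
        costs = rng.lognormal(mean=7.0, sigma=1.2, size=40)   # skewed costs, n = 40

        se_clt = costs.std(ddof=1) / np.sqrt(len(costs))      # CLT-based SE
        boot_means = [rng.choice(costs, size=len(costs), replace=True).mean()
                      for _ in range(5000)]                   # bootstrap resamples
        se_boot = np.std(boot_means, ddof=1)
        print(round(se_clt, 1), round(se_boot, 1))            # typically very close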

  5. Experiments with central-limit properties of spatial samples from locally covariant random fields

    USGS Publications Warehouse

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.

  6. A post hoc evaluation of a sample size re-estimation in the Secondary Prevention of Small Subcortical Strokes study.

    PubMed

    McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S

    2016-10-01

    The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study, resulting in an increase from the planned sample size of 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020 HR (95% confidence interval): 0.92 (0.72, 1.2), p = 0.48; n = 2500 HR (95% confidence interval): 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020 HR (95% confidence interval): 0.81 (0.63, 1.0), p = 0.089; n = 2500 HR (95% confidence interval): 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs. © The Author(s) 2016.
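
    A minimal sketch of the post hoc idea (on simulated placeholder data, not the SPS3 dataset): fit the same Cox proportional hazards model to the first 2500 randomized patients and to all 3020, and compare the estimated hazard ratios. The column names and the use of the lifelines package are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumed dependency

rng = np.random.default_rng(0)
n = 3020
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),   # 0 = monotherapy, 1 = dual therapy (hypothetical)
    "time": rng.exponential(3.0, n),  # years of follow-up (hypothetical)
    "event": rng.integers(0, 2, n),   # 1 = recurrent stroke observed (hypothetical)
})

for n_sub, label in [(3020, "full sample"), (2500, "pre-re-estimation")]:
    cph = CoxPHFitter().fit(df.iloc[:n_sub], duration_col="time", event_col="event")
    print(label, "HR =", round(float(np.exp(cph.params_["treat"])), 2))
```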

  7. [Potentials in the regionalization of health indicators using small-area estimation methods : Exemplary results based on the 2009, 2010 and 2012 GEDA studies].

    PubMed

    Kroll, Lars Eric; Schumann, Maria; Müters, Stephan; Lampert, Thomas

    2017-12-01

    Nationwide health surveys can be used to estimate regional differences in health. With traditional estimation techniques, the spatial depth of these estimates is limited by the constrained sample size. So far - without special refreshment samples - results have only been available for the larger populated federal states of Germany. An alternative is regression-based small-area estimation techniques. These models can generate smaller-scale data, but are also subject to greater statistical uncertainties because of the model assumptions. In the present article, exemplary regionalized estimates of the self-rated health status of respondents, based on the studies "Gesundheit in Deutschland aktuell" (GEDA studies) 2009, 2010 and 2012, are compared. The aim of the article is to analyze the range of regional estimates in order to assess the usefulness of the techniques for health reporting more adequately. The results show that the estimated prevalence is relatively stable when using different samples. Important determinants of the variation of the estimates are the achieved sample size at the district level and the type of the district (cities vs. rural regions). Overall, the present study shows that small-area modeling of prevalence is associated with additional uncertainties compared to conventional estimates, which should be taken into account when interpreting the corresponding findings.

  8. Results of a Pilot Study to Ameliorate Psychological and Behavioral Outcomes of Minority Stress Among Young Gay and Bisexual Men.

    PubMed

    Smith, Nathan Grant; Hart, Trevor A; Kidwai, Ammaar; Vernon, Julia R G; Blais, Martin; Adam, Barry

    2017-09-01

    Project PRIDE (Promoting Resilience In Discriminatory Environments) is an 8-session small group intervention aimed at reducing negative mental and behavioral health outcomes resulting from minority stress. This study reports the results of a one-armed pilot test of Project PRIDE, which aimed to examine the feasibility and potential for efficacy of the intervention in a sample of 33 gay and bisexual men aged 18 to 25. The intervention appeared feasible to administer in two different sites and all participants who completed posttreatment (n = 22) or follow-up (n = 19) assessments reported high satisfaction with the intervention. Small to large effect sizes were observed for increases in self-esteem; small effect sizes were found for decreases in loneliness and decreases in minority stress variables; and small and medium effect sizes were found for reductions in alcohol use and number of sex partners, respectively. Overall, Project PRIDE appears to be a feasible intervention with promise of efficacy. Copyright © 2017. Published by Elsevier Ltd.

  9. Two-Step Sintering Behavior of Sol-Gel Derived Dense and Submicron-Grained YIG Ceramics

    NASA Astrophysics Data System (ADS)

    Chen, Ruoyuan; Zhou, Jijun; Zheng, Liang; Zheng, Hui; Zheng, Peng; Ying, Zhihua; Deng, Jiangxia

    2018-04-01

    In this work, dense and submicron-grain yttrium iron garnet (YIG, Y3Fe5O12) ceramics were fabricated by a two-step sintering (TSS) method using nano-size YIG powder prepared by a citrate sol-gel method. The densification, microstructure, magnetic properties and ferromagnetic resonance (FMR) linewidth of the ceramics were investigated. The sample prepared with T1 = 1300°C, T2 = 1225°C and an 18 h holding time has a density higher than 98% of the theoretical value and exhibits a homogeneous microstructure with fine grain size (0.975 μm). In addition, the saturation magnetization (MS) of this sample reaches 27.18 emu/g. High density and small grain size can also achieve a small FMR linewidth. Consequently, these results show that the sol-gel process combined with the TSS process can effectively suppress grain-boundary migration while maintaining active grain-boundary diffusion to obtain dense and fine-grained YIG ceramics with appropriate magnetic properties.

  10. Metallographic Characterization of Wrought Depleted Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsyth, Robert Thomas; Hill, Mary Ann

    Metallographic characterization was performed on wrought depleted uranium (DU) samples taken from the longitudinal and transverse orientations from specific locations on two specimens. Characterization of the samples included general microstructure, inclusion analysis, grain size analysis, and microhardness testing. Comparisons of the characterization results were made to determine any differences based on specimen, sample orientation, or sample location. In addition, the characterization results for the wrought DU samples were also compared with data obtained from the metallographic characterization of cast DU samples previously characterized. No differences were observed in microstructure, inclusion size, morphology, and distribution, or grain size in regard to specimen, location, or orientation for the wrought depleted uranium samples. However, a small difference was observed in average hardness with regard to orientation at the same locations within the same specimen. The longitudinal samples were slightly harder than the transverse samples from the same location of the same specimen. This was true for both wrought DU specimens. Comparing the wrought DU sample data with the previously characterized cast DU sample data, distinct differences in microstructure, inclusion size, morphology and distribution, grain size, and microhardness were observed. As expected, the microstructure of the wrought DU samples consisted of small recrystallized grains which were uniform, randomly oriented, and equiaxed with minimal twinning observed in only a few grains. In contrast, the cast DU microstructure consisted of large irregularly shaped grains with extensive twinning observed in most grains. Inclusions in the wrought DU samples were elongated, broken and cracked, and light and dark phases were observed in some inclusions. The mean inclusion area percentage for the wrought DU samples ranged from 0.08% to 0.34% and the average density from all wrought DU samples was 1.62E+04/cm². Inclusions in the cast DU samples were equiaxed and intact with light and dark phases observed in some inclusions. The mean inclusion area percentage for the cast DU samples ranged from 0.93% to 1.00% and the average density from all cast DU samples was 2.83E+04/cm². The average mean grain area from all wrought DU samples was 141 μm² while the average mean grain area from all cast DU samples was 1.7 mm². The average Knoop microhardness from all wrought DU samples was 215 HK and the average Knoop microhardness from all cast DU samples was 264 HK.

  11. A LDR-PCR approach for multiplex polymorphisms genotyping of severely degraded DNA with fragment sizes <100 bp.

    PubMed

    Zhang, Zhen; Wang, Bao-Jie; Guan, Hong-Yu; Pang, Hao; Xuan, Jin-Feng

    2009-11-01

    Reducing amplicon sizes has become a major strategy for analyzing degraded DNA typical of forensic samples. However, amplicon sizes in current mini-short tandem repeat-polymerase chain reaction (PCR) and mini-sequencing assays are still not suitable for analysis of severely degraded DNA. In this study, we present a multiplex typing method that couples ligase detection reaction with PCR that can be used to identify single nucleotide polymorphisms and small-scale insertion/deletions in a sample of severely fragmented DNA. This method adopts thermostable ligation for allele discrimination and subsequent PCR for signal enhancement. In this study, four polymorphic loci were used to assess the ability of this technique to discriminate alleles in an artificially degraded sample of DNA with fragment sizes <100 bp. Our results showed clear allelic discrimination of single or multiple loci, suggesting that this method might aid in the analysis of extremely degraded samples in which allelic drop out of larger fragments is observed.

  12. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    PubMed

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude if the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited.
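
    For readers who want to reproduce the arithmetic of a rate comparison like the one above, here is a hedged sketch (with made-up event counts and patient-days, not the study data) of an adverse event rate per 1000 patient-days and a rate ratio with a large-sample Poisson CI on the log scale.

```python
import math

def rate_per_1000(events, patient_days):
    return 1000.0 * events / patient_days

def rate_ratio_ci(e1, d1, e2, d2, z=1.96):
    """Rate ratio (sample 1 vs sample 2) with a large-sample CI on the log scale."""
    rr = (e1 / d1) / (e2 / d2)
    se_log = math.sqrt(1.0 / e1 + 1.0 / e2)  # SE of log(rate ratio) for Poisson counts
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

# Hypothetical counts and patient-days, for illustration only
print(rate_per_1000(250, 6360))                 # adverse events per 1000 patient-days
rr, lo, hi = rate_ratio_ci(250, 6360, 35, 1290)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```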

  13. Sampling errors in the estimation of empirical orthogonal functions. [for climatology studies

    NASA Technical Reports Server (NTRS)

    North, G. R.; Bell, T. L.; Cahalan, R. F.; Moeng, F. J.

    1982-01-01

    Empirical Orthogonal Functions (EOF's), eigenvectors of the spatial cross-covariance matrix of a meteorological field, are reviewed with special attention given to the necessary weighting factors for gridded data and the sampling errors incurred when too small a sample is available. The geographical shape of an EOF shows large intersample variability when its associated eigenvalue is 'close' to a neighboring one. A rule of thumb indicating when an EOF is likely to be subject to large sampling fluctuations is presented. An explicit example, based on the statistics of the 500 mb geopotential height field, displays large intersample variability in the EOF's for sample sizes of a few hundred independent realizations, a size seldom exceeded by meteorological data sets.
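
    The rule of thumb referred to above is commonly stated as follows: the sampling error of an eigenvalue is roughly δλ ≈ λ√(2/N), and an EOF is likely subject to large sampling fluctuations when this error is comparable to the gap to a neighboring eigenvalue. A minimal sketch with illustrative eigenvalues and sample size:

```python
import numpy as np

def unstable_eofs(eigvals, n_samples):
    """Flag EOFs whose eigenvalue sampling error reaches the nearest-neighbor gap."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]  # descending eigenvalues
    err = lam * np.sqrt(2.0 / n_samples)                   # rule-of-thumb sampling error
    gap = np.minimum(np.abs(np.diff(lam, prepend=np.inf)),
                     np.abs(np.diff(lam, append=-np.inf)))
    return err >= gap  # True => EOF likely mixed with a neighbor by sampling noise

# Two nearly degenerate leading eigenvalues, N = 300 independent realizations
print(unstable_eofs([10.0, 9.5, 5.0, 1.0], n_samples=300))  # [ True  True False False]
```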

  14. Robust Covariate-Adjusted Log-Rank Statistics and Corresponding Sample Size Formula for Recurrent Events Data

    PubMed Central

    Song, Rui; Kosorok, Michael R.; Cai, Jianwen

    2009-01-01

    Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring, and the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. They reduce to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and the comparison of powers between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107
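
    The classical Schoenfeld (1983) form to which the proposed formula reduces gives the required number of events as d = (z_{1-α/2} + z_{1-β})² / (p(1-p)·(log HR)²) for allocation fraction p. A minimal sketch of that standard formula (not the paper's covariate-adjusted version):

```python
from math import log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80, p=0.5):
    """Required number of events to detect hazard ratio `hr` (two-sided alpha)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z**2 / (p * (1 - p) * log(hr) ** 2)

print(round(schoenfeld_events(hr=0.75)))  # about 379 events at 80% power
```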

  15. The Relationship between Organizational Learning and SME Performance in Poland

    ERIC Educational Resources Information Center

    Michna, Anna

    2009-01-01

    Purpose: The purpose of this paper is to identify and define dimensions of organizational learning and the way it affects small- or medium-size enterprise (SME) performance. Design/methodology/approach: The empirical research is carried out in Polish SMEs (the sample size is 211 enterprises). In order to test the constructed hypotheses we use…

  16. Are Parents' Gender Schemas Related to Their Children's Gender-Related Cognitions? A Meta-Analysis.

    ERIC Educational Resources Information Center

    Tenenbaum, Harriet R.; Leaper, Campbell

    2002-01-01

    Used meta-analysis to examine relationship of parents' gender schemas and their offspring's gender-related cognitions, with samples ranging in age from infancy through early adulthood. Found a small but meaningful effect size (r=.16) indicating a positive correlation between parent gender schema and offspring measures. Effect sizes were influenced…

  17. Motion mitigation for lung cancer patients treated with active scanning proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu; Dowdell, Stephen; Sharp, Greg

    2015-05-15

    Purpose: Motion interplay can affect the tumor dose in scanned proton beam therapy. This study assesses the ability of rescanning and gating to mitigate interplay effects during lung treatments. Methods: The treatments of five lung cancer patients [48 Gy(RBE)/4fx] with varying tumor size (21.1–82.3 cm³) and motion amplitude (2.9–30.6 mm) were simulated employing 4D Monte Carlo. The authors investigated two spot sizes (σ ∼ 12 and ∼3 mm), three rescanning techniques (layered, volumetric, breath-sampled volumetric) and respiratory gating with a 30% duty cycle. Results: For 4/5 patients, layered rescanning 6/2 times (for the small/large spot size) maintains an equivalent uniform dose within the target >98% for a single fraction. Breath sampling the timing of rescanning is ∼2 times more effective than the same number of continuous rescans. Volumetric rescanning is sensitive to synchronization effects, which were observed in 3/5 patients, though not for layered rescanning. For the large spot size, rescanning compared favorably with gating in terms of time requirements, i.e., 2x-rescanning is on average a factor ∼2.6 faster than gating for this scenario. For the small spot size however, 6x-rescanning takes on average 65% longer compared to gating. Rescanning has no effect on normal lung V20 and mean lung dose (MLD), though it reduces the maximum lung dose by on average 6.9 ± 2.4/16.7 ± 12.2 Gy(RBE) for the large and small spot sizes, respectively. Gating leads to a similar reduction in maximum dose and additionally reduces V20 and MLD. Breath-sampled rescanning is most successful in reducing the maximum dose to the normal lung. Conclusions: Both rescanning (2–6 times, depending on the beam size) and gating were able to mitigate interplay effects in the target for 4/5 patients studied. Layered rescanning is superior to volumetric rescanning, as the latter suffers from synchronization effects in 3/5 patients studied. Gating minimizes the irradiated volume of normal lung more efficiently, while breath-sampled rescanning is superior in reducing maximum doses to organs at risk.

  18. Evaluating morphometric body mass prediction equations with a juvenile human test sample: accuracy and applicability to small-bodied hominins.

    PubMed

    Walker, Christopher S; Yapuncich, Gabriel S; Sridhar, Shilpa; Cameron, Noël; Churchill, Steven E

    2018-02-01

    Body mass is an ecologically and biomechanically important variable in the study of hominin biology. Regression equations derived from recent human samples allow for the reasonable prediction of body mass of later, more human-like, and generally larger hominins from hip joint dimensions, but potential differences in hip biomechanics across hominin taxa render their use questionable with some earlier taxa (i.e., Australopithecus spp.). Morphometric prediction equations using stature and bi-iliac breadth avoid this problem, but their applicability to early hominins, some of which differ in both size and proportions from modern adult humans, has not been demonstrated. Here we use mean stature, bi-iliac breadth, and body mass from a global sample of human juveniles ranging in age from 6 to 12 years (n = 530 age- and sex-specific group annual means from 33 countries/regions) to evaluate the accuracy of several published morphometric prediction equations when applied to small humans. Though the body proportions of modern human juveniles likely differ from those of small-bodied early hominins, human juveniles (like fossil hominins) often differ in size and proportions from adult human reference samples and, accordingly, serve as a useful model for assessing the robustness of morphometric prediction equations. Morphometric equations based on adults systematically underpredict body mass in the youngest age groups and moderately overpredict body mass in the older groups, which fall in the body size range of adult Australopithecus (∼26-46 kg). Differences in body proportions, notably the ratio of lower limb length to stature, influence predictive accuracy. Ontogenetic changes in these body proportions likely influence the shift in prediction error (from under- to overprediction). However, because morphometric equations are reasonably accurate when applied to this juvenile test sample, we argue these equations may be used to predict body mass in small-bodied hominins, despite the potential for some error induced by differing body proportions and/or extrapolation beyond the original reference sample range. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Bony pelvic canal size and shape in relation to body proportionality in humans.

    PubMed

    Kurki, Helen K

    2013-05-01

    Obstetric selection acts on the female pelvic canal to accommodate the human neonate and contributes to pelvic sexual dimorphism. There is a complex relationship between selection for obstetric sufficiency and for overall body size in humans. The relationship between selective pressures may differ among populations of different body sizes and proportions, as pelvic canal dimensions vary among populations. Size and shape of the pelvic canal in relation to body size and shape were examined using nine skeletal samples (total female n = 57; male n = 84) from diverse geographical regions. Pelvic, vertebral, and lower limb bone measurements were collected. Principal component analyses demonstrate pelvic canal size and shape differences among the samples. Male multivariate variance in pelvic shape is greater than female variance for North and South Africans. High-latitude samples have larger and broader bodies, and pelvic canals of larger size and, among females, relatively broader medio-lateral dimensions relative to low-latitude samples, which tend to display relatively expanded inlet antero-posterior (A-P) and posterior canal dimensions. Differences in canal shape exist among samples that are not associated with latitude or body size, suggesting independence of some canal shape characteristics from body size and shape. The South Africans are distinctive with very narrow bodies and small pelvic inlets relative to an elongated lower canal in A-P and posterior lengths. Variation in pelvic canal geometry among populations is consistent with a high degree of evolvability in the human pelvis. Copyright © 2013 Wiley Periodicals, Inc.

  20. Designing Work-Integrated Learning Placements That Improve Student Employability: Six Facets of the Curriculum That Matter

    ERIC Educational Resources Information Center

    Smith, Calvin; Ferns, Sonia; Russell, Leoni

    2016-01-01

    Research into work-integrated learning continues to show through a variety of small-scale and anecdotal studies, various positive impacts on student learning, work-readiness, personal and cognitive development and other outcomes. Seldom are these research findings strongly generalizable because of such factors as small sample sizes,…

  1. Size-selective separation of submicron particles in suspensions with ultrasonic atomization.

    PubMed

    Nii, Susumu; Oka, Naoyoshi

    2014-11-01

    Aqueous suspensions containing silica or polystyrene latex were ultrasonically atomized for separating particles of a specific size. With the help of a fog of fine liquid droplets with a narrow size distribution, submicron particles in a limited size range were successfully separated from the suspensions. Performance of the separation was characterized by analyzing the size and the concentration of the collected particles with a high-resolution method. Irradiation of sample suspensions with 2.4 MHz ultrasound allowed the separation of particles of specific sizes from 90 to 320 nm, regardless of the type of material. Addition of a small amount of the nonionic surfactant PONPE20 to SiO2 suspensions enhanced the collection of finer particles and achieved a remarkable increase in the number of collected particles. Degassing of the sample suspension eliminated the separation performance. Dissolved air in suspensions plays an important role in this separation. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Effect of finite sample size on feature selection and classification: a simulation study.

    PubMed

    Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping

    2010-02-01

    The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve (Az). The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
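
    A minimal sketch (a drastically simplified stand-in, not the authors' simulation) of the core design: draw two Gaussian classes with equal covariance and unequal means in a 50-dimensional feature space, train LDA and an RBF-kernel SVM on small samples, and compare hold-out AUC (Az) as the training size grows. Parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dim, shift = 50, 0.25  # feature dimensionality and per-feature mean shift (assumed)

def draw(n_per_class):
    x0 = rng.normal(0.0, 1.0, (n_per_class, dim))
    x1 = rng.normal(shift, 1.0, (n_per_class, dim))  # equal covariance, unequal means
    return np.vstack([x0, x1]), np.repeat([0, 1], n_per_class)

x_test, y_test = draw(1000)  # large hold-out set for a stable AUC estimate
for n_train in (15, 25, 50, 100):  # per-class training sizes, as in the study
    x_tr, y_tr = draw(n_train)
    for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
        auc = roc_auc_score(y_test, clf.fit(x_tr, y_tr).decision_function(x_test))
        print(n_train, type(clf).__name__, round(auc, 3))
```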

  3. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attributes of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter becomes unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
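
    As a sketch of one of the three estimators examined (the method of moments, for which var = μ + αμ²), the following simulation shows how often the estimate α̂ = (s² − x̄)/x̄² becomes unusable (non-positive) when the sample mean and sample size are low; parameter values are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)

def frac_unusable_mom(mu, alpha, n, reps=2000):
    """Share of NB samples whose method-of-moments dispersion estimate is <= 0."""
    # NB(mean mu, var mu + alpha*mu^2) simulated as a gamma-mixed Poisson
    lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=(reps, n))
    y = rng.poisson(lam)
    m = y.mean(axis=1)
    v = y.var(axis=1, ddof=1)
    alpha_hat = (v - m) / m**2  # method-of-moments estimator of the dispersion
    return float(np.mean(alpha_hat <= 0))

for mu, n in [(0.5, 50), (0.5, 500), (5.0, 50)]:
    print(f"mean={mu}, n={n}: unusable fraction = {frac_unusable_mom(mu, 0.5, n):.3f}")
```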

  4. Microsystem strategies for sample preparation in biological detection.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.; Galambos, Paul C.; Bennett, Dawn Jonita

    2005-03-01

    The objective of this LDRD was to develop microdevice strategies for dealing with samples to be examined in biological detection systems. This includes three sub-components: namely, microdevice fabrication, sample delivery to the microdevice, and sample processing within the microdevice. The first component of this work focused on utilizing Sandia's surface micromachining technology to fabricate small volume (nanoliter) fluidic systems for processing small quantities of biological samples. The next component was to develop interfaces for the surface-micromachined silicon devices. We partnered with Micronics, a commercial company, to produce fluidic manifolds for sample delivery to our silicon devices. Pressure testing was completedmore » to examine the strength of the bond between the pressure-sensitive adhesive layer and the silicon chip. We are also pursuing several other methods, both in house and external, to develop polymer-based fluidic manifolds for packaging silicon-based microfluidic devices. The second component, sample processing, is divided into two sub-tasks: cell collection and cell lysis. Cell collection was achieved using dielectrophoresis, which employs AC fields to collect cells at energized microelectrodes, while rejecting non-cellular particles. Both live and dead Staph. aureus bacteria have been collected using RF frequency dielectrophoresis. Bacteria have been separated from polystyrene microspheres using frequency-shifting dielectrophoresis. Computational modeling was performed to optimize device separation performance, and to predict particle response to the dielectrophoretic traps. Cell lysis is continuing to be pursued using microactuators to mechanically disrupt cell membranes. Novel thermal actuators, which can generate larger forces than previously tested electrostatic actuators, have been incorporated with and tested with cell lysis devices. Significant cell membrane distortion has been observed, but more experiments need to be conducted to determine the effects of the observed distortion on membrane integrity and cell viability. Finally, we are using a commercial PCR DNA amplification system to determine the limits of detectable sample size, and to examine the amplification of DNA bound to microspheres. Our objective is to use microspheres as capture-and-carry chaperones for small molecules such as DNA and proteins, enabling the capture and concentration of the small molecules using dielectrophoresis. Current tests demonstrated amplification of DNA bound to micron-sized polystyrene microspheres using 20-50 microliter volume size reactions.« less

  5. Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans.

    PubMed

    Shah, R; Worner, S P; Chapman, R B

    2012-10-01

    Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
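
    The random-dispersion detection case follows a textbook calculation: the smallest n such that the probability of including at least one resistant individual reaches P when the resistance frequency is p, i.e. n ≥ ln(1−P)/ln(1−p). A minimal sketch of that formula (the study's reported sizes come from its own simulations and differ somewhat from this theoretical minimum):

```python
import math

def detection_sample_size(p, detect_prob=0.95):
    """Smallest n giving probability detect_prob of >= 1 resistant individual."""
    return math.ceil(math.log(1 - detect_prob) / math.log(1 - p))

for p in (0.01, 0.10, 0.20):
    print(f"resistance frequency {p:.0%}: n = {detection_sample_size(p)}")
```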

  6. Ferromagnetism appears in nitrogen implanted nanocrystalline diamond films

    NASA Astrophysics Data System (ADS)

    Remes, Zdenek; Sun, Shih-Jye; Varga, Marian; Chou, Hsiung; Hsu, Hua-Shu; Kromka, Alexander; Horak, Pavel

    2015-11-01

    Nanocrystalline diamond films become ferromagnetic after implantation with various doses of nitrogen. In this research, the room-temperature ferromagnetism of the implanted samples is confirmed by measurements of magnetic circular dichroism (MCD) and a superconducting quantum interference device (SQUID). Samples with larger crystalline grains as well as higher implanted doses present more robust ferromagnetic signals at room temperature. Raman spectra indicate that the small grain-sized samples are much more disordered than the large grain-sized ones. We propose that slightly larger saturated ferromagnetism could be observed at low temperature because the increased localization effects have a significant impact on the more disordered structure.

  7. Utility of the Mantel-Haenszel Procedure for Detecting Differential Item Functioning in Small Samples

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Ferreres, Doris; Muniz, Jose

    2004-01-01

    Sample-size restrictions limit the contingency table approaches based on asymptotic distributions, such as the Mantel-Haenszel (MH) procedure, for detecting differential item functioning (DIF) in many practical applications. Within this framework, the present study investigated the power and Type I error performance of empirical and inferential…

  8. The Effects of Maternal Social Phobia on Mother-Infant Interactions and Infant Social Responsiveness

    ERIC Educational Resources Information Center

    Murray, Lynne; Cooper, Peter; Creswell, Cathy; Schofield, Elizabeth; Sack, Caroline

    2007-01-01

    Background: Social phobia aggregates in families. The genetic contribution to intergenerational transmission is modest, and parenting is considered important. Research on the effects of social phobia on parenting has been subject to problems of small sample size, heterogeneity of samples and lack of specificity of observational frameworks. We…

  9. Pediatric Disability and Caregiver Separation

    ERIC Educational Resources Information Center

    McCoyd, Judith L. M.; Akincigil, Ayse; Paek, Eun Kwang

    2010-01-01

    The evidence that the birth of a child with a disability leads to divorce or separation is equivocal, with the majority of recent research suggesting that such a birth and childrearing may be stressful, but not necessarily toxic, to the caregiver relationship. Such research has been limited by small sample sizes and nonrepresentative samples and…

  10. Methods for measuring populations of small, diurnal forest birds.

    Treesearch

    D.A. Manuwal; A.B. Carey

    1991-01-01

    Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...

  11. The ex situ conservation strategy for endangered plant species: small samples, storage and lessons from seed collected from US national parks

    USDA-ARS?s Scientific Manuscript database

    Ex situ collections of seeds sampled from wild populations provide germplasm for restoration and for scientific study about biological diversity. Seed collections of endangered species are urgent because they might forestall ever-dwindling population size and genetic diversity. However, collecting ...

  12. Digital image processing of nanometer-size metal particles on amorphous substrates

    NASA Technical Reports Server (NTRS)

    Soria, F.; Artal, P.; Bescos, J.; Heinemann, K.

    1989-01-01

    The task of differentiating very small metal aggregates supported on amorphous films from the phase-contrast image features inherently stemming from the support is extremely difficult in the nanometer particle size range. Digital image processing was employed to overcome some of the ambiguities in evaluating such micrographs. It was demonstrated that such processing allowed positive particle detection and a limited degree of statistical size analysis, even for micrographs in which, by naked-eye examination, the distinction between particles and spurious substrate features would seem highly ambiguous. The smallest size class detected for Pd/C samples peaks at 0.8 nm. This size class was found in various samples prepared under different evaporation conditions, and it is concluded that these particles consist of a 'magic number' of 13 atoms and have a cuboctahedral or icosahedral crystal structure.

  13. Particle size and surface area effects on the thin-pulse shock initiation of Diaminoazoxyfurazan (DAAF)

    NASA Astrophysics Data System (ADS)

    Burritt, Rosemary; Francois, Elizabeth; Windler, Gary; Chavez, David

    2017-06-01

    Diaminoazoxyfurazan (DAAF) has many of the safety characteristics of an insensitive high explosive (IHE): it is extremely insensitive to impact and friction and is comparable to triaminotrinitrobenzene (TATB) in this way. Conversely, it demonstrates many performance characteristics of a conventional high explosive (CHE). DAAF has a small failure diameter of about 1.25 mm and can be sensitive to shock under the right conditions. Large-particle-size DAAF will not initiate in a typical exploding foil initiator (EFI) configuration, but smaller particle sizes will. Large-particle-size DAAF, of 40 μm, was crash-precipitated and ball-milled into six distinct samples and pressed into pellets with a density of 1.60 g/cc (91% TMD). To investigate the effect of particle size and surface area on the direct initiation of DAAF, multiple threshold tests were performed on each sample of DAAF in different EFI configurations, which varied in flyer thickness and/or bridge size. Comparative tests were performed examining threshold voltage and correlated to Photon Doppler Velocimetry (PDV) results. The samples with larger particle sizes and surface areas required more energy to initiate, while the smaller particle sizes required less energy and could be initiated with smaller-diameter flyers.

  14. Sediment loads and transport at constructed chutes along the Missouri River - Upper Hamburg Chute near Nebraska City, Nebraska, and Kansas Chute near Peru, Nebraska

    USGS Publications Warehouse

    Densmore, Brenda K.; Rus, David L.; Moser, Matthew T.; Hall, Brent M.; Andersen, Michael J.

    2016-02-04

    Comparisons of concentrations and loads from EWI samples collected from different transects within a study site resulted in few significant differences, but comparisons are limited by small sample sizes and large within-transect variability. When comparing the Missouri River upstream transect to the chute inlet transect, similar results were determined in 2012 as were determined in 2008—the chute inlet affected the amount of sediment entering the chute from the main channel. In addition, the Kansas chute is potentially affecting the sediment concentration within the Missouri River main channel, but small sample size and construction activities within the chute limit the ability to fully understand either the effect of the chute in 2012 or the effect of the chute on the main channel during a year without construction. Finally, some differences in SSC were detected between the Missouri River upstream transects and the chute downstream transects; however, the effect of the chutes on the Missouri River main-channel sediment transport was difficult to isolate because of construction activities and sampling variability.

  15. Analysis of Crystallographic Structure of a Japanese Sword by the Pulsed Neutron Transmission Method

    NASA Astrophysics Data System (ADS)

    Kino, K.; Ayukawa, N.; Kiyanagi, Y.; Uchida, T.; Uno, S.; Grazzi, F.; Scherillo, A.

    We measured two-dimensional transmission spectra of pulsed neutron beams for a Japanese sword sample. Atomic density, crystallite size, and preferred orientation of crystals were obtained using the RITS code. The position dependence of the atomic density is consistent with the shape of the sample. The crystallite size is very small and shows a position dependence, which is understood from the unique structure of Japanese swords. The preferred orientation has a strong position dependence. Our study shows the usefulness of the pulsed neutron transmission method for cultural metal artifacts.

  16. Self-navigation of a scanning tunneling microscope tip toward a micron-sized graphene sample.

    PubMed

    Li, Guohong; Luican, Adina; Andrei, Eva Y

    2011-07-01

    We demonstrate a simple capacitance-based method to quickly and efficiently locate micron-sized conductive samples, such as graphene flakes, on insulating substrates in a scanning tunneling microscope (STM). By using edge recognition, the method is designed to locate and to identify small features when the STM tip is far above the surface, allowing for crash-free search and navigation. The method can be implemented in any STM environment, even at low temperatures and in strong magnetic field, with minimal or no hardware modifications.

  17. Device and technique for in-process sampling and analysis of molten metals and other liquids presenting harsh sampling conditions

    DOEpatents

    Alvarez, Joseph L.; Watson, Lloyd D.

    1989-01-01

    An apparatus and method for continuously analyzing liquids by creating a supersonic spray which is shaped and sized prior to delivery of the spray to an analysis apparatus. The gas and liquid are mixed in a converging-diverging nozzle, where the liquid is sheared into small particles of a size and uniformity that form a spray which can be controlled through adjustment of pressures and gas velocity. The spray is shaped by a concentric supplemental flow of gas.

  18. Replicability and Robustness of GWAS for Behavioral Traits

    PubMed Central

    Rietveld, Cornelius A.; Conley, Dalton; Eriksson, Nicholas; Esko, Tõnu; Medland, Sarah E.; Vinkhuyzen, Anna A.E.; Yang, Jian; Boardman, Jason D.; Chabris, Christopher F.; Dawes, Christopher T.; Domingue, Benjamin W.; Hinds, David A.; Johannesson, Magnus; Kiefer, Amy K.; Laibson, David; Magnusson, Patrik K. E.; Mountain, Joanna L.; Oskarsson, Sven; Rostapshova, Olga; Teumer, Alexander; Tung, Joyce Y.; Visscher, Peter M.; Benjamin, Daniel J.; Cesarini, David; Koellinger, Philipp D.

    2015-01-01

    A recent genome-wide association study (GWAS) of educational attainment identified three single-nucleotide polymorphisms (SNPs) that, despite their small effect sizes (each R² ≈ 0.02%), reached genome-wide significance (p < 5×10⁻⁸) in a large discovery sample and replicated in an independent sample (p < 0.05). The study also reported associations between educational attainment and indices of SNPs called “polygenic scores.” We evaluate the robustness of these findings. Study 1 finds that all three SNPs replicate in another large (N = 34,428) independent sample. We also find that the scores remain predictive (R² ≈ 2%) with stringent controls for stratification (Study 2) and in new within-family analyses (Study 3). Our results show that large and therefore well-powered GWASs can identify replicable genetic associations with behavioral traits. The small effect sizes of individual SNPs are likely to be a major contributing explanation for the striking contrast between our results and the disappointing replication record of most candidate gene studies. PMID:25287667

  19. Consultant-Client Relationship and Knowledge Transfer in Small- and Medium-Sized Enterprises Change Processes.

    PubMed

    Martinez, Luis F; Ferreira, Aristides I; Can, Amina B

    2016-04-01

    Based on Szulanski's knowledge transfer model, this study examined how the communicational, motivational, and sharing-of-understanding variables influenced knowledge transfer and change processes in small- and medium-sized enterprises, particularly under projects developed by funded programs. The sample comprised 144 entrepreneurs, mostly male (65.3%) and mostly aged 35 to 45 years (40.3%), who filled out an online questionnaire measuring the variables of "sharing of understanding," "motivation," "communication encoding competencies," "source credibility," "knowledge transfer," and "organizational change." Data were collected between 2011 and 2012 and measured the relationship between clients and consultants working in a Portuguese small- and medium-sized enterprise-oriented action learning program. To test the hypotheses, structural equation modeling was conducted to identify the antecedents of the sharing-of-understanding, motivational, and communicational variables, which were positively correlated with the knowledge transfer between consultants and clients. This transfer was also positively correlated with organizational change. Overall, the study provides important considerations for practitioners and academicians and establishes new avenues for future studies concerning the issues of the consultant-client relationship and the efficacy of government-funded programs designed to improve performance of small- and medium-sized enterprises. © The Author(s) 2016.

  20. A Monte Carlo Study of Levene's Test of Homogeneity of Variance: Empirical Frequencies of Type I Error in Normal Distributions.

    ERIC Educational Resources Information Center

    Neel, John H.; Stallings, William M.

    An influential statistics text recommends Levene's test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…

  1. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Microfluidic interconnects

    DOEpatents

    Benett, William J.; Krulevitch, Peter A.

    2001-01-01

    A miniature connector for introducing microliter quantities of solutions into microfabricated fluidic devices. The fluidic connector, for example, joins standard high pressure liquid chromatography (HPLC) tubing to 1 mm diameter holes in silicon or glass, enabling ml-sized volumes of sample solutions to be merged with μl-sized devices. The connector has many features, including ease of connect and disconnect; a small footprint which enables numerous connectors to be located in a small area; low dead volume; helium leak-tight; and tubing does not twist during connection. Thus the connector enables easy and effective change of microfluidic devices and introduction of different solutions in the devices.

  3. Measuring the molecular dimensions of wine tannins: comparison of small-angle X-ray scattering, gel-permeation chromatography and mean degree of polymerization.

    PubMed

    McRae, Jacqui M; Kirby, Nigel; Mertens, Haydyn D T; Kassara, Stella; Smith, Paul A

    2014-07-23

    The molecular size of wine tannins can influence astringency, and yet it has been unclear whether the standard methods for determining average tannin molecular weight (MW), including gel-permeation chromatography (GPC) and depolymerization reactions, are actually related to the size of the tannin in wine-like conditions. Small-angle X-ray scattering (SAXS) was therefore used to determine the molecular sizes and corresponding MWs of wine tannin samples from 3- and 7-year-old Cabernet Sauvignon wines in a variety of wine-like matrixes (5-15% and 100% ethanol; 0-200 mM NaCl; pH 3.0-4.0) and compared to those measured using the standard methods. The SAXS results indicated that the tannin samples from the older wine were larger than those of the younger wine and that wine composition did not greatly impact tannin molecular size. The average tannin MWs as determined by GPC correlated strongly with the SAXS results, suggesting that this method does give a good indication of tannin molecular size in wine-like conditions. The MW as determined from the depolymerization reactions did not correlate as strongly with the SAXS results. To our knowledge, SAXS measurements have not previously been attempted for wine tannins.

  4. Intermediate Pond Sizes Contain the Highest Density, Richness, and Diversity of Pond-Breeding Amphibians

    PubMed Central

    Semlitsch, Raymond D.; Peterman, William E.; Anderson, Thomas L.; Drake, Dana L.; Ousterhout, Brittany H.

    2015-01-01

    We present data on amphibian density, species richness, and diversity from a 7140-ha area consisting of 200 ponds in the Midwestern U.S. that represents most of the possible lentic aquatic breeding habitats common in this region. Our study includes all possible breeding sites with natural and anthropogenic disturbance processes that can be missing from studies where sampling intensity is low, sample area is small, or partial disturbance gradients are sampled. We tested whether pond area was a significant predictor of density, species richness, and diversity of amphibians and if values peaked at intermediate pond areas. We found that in all cases a quadratic model fit our data significantly better than a linear model. Because small ponds have a high probability of pond drying and large ponds have a high probability of fish colonization and accumulation of invertebrate predators, drying and predation may be two mechanisms driving the peak of density and diversity towards intermediate values of pond size. We also found that not all intermediate sized ponds produced many larvae; in fact, some had low amphibian density, richness, and diversity. Further analyses of the subset of ponds represented in the peak of the area distribution showed that fish, hydroperiod, invertebrate density, and canopy are additional factors that drive density, richness and diversity of ponds up or down, when extremely small or large ponds are eliminated. Our results indicate that fishless ponds at intermediate sizes are more diverse, produce more larvae, and have greater potential to recruit juveniles into adult populations of most species sampled. Further, hylid and chorus frogs are found predictably more often in ephemeral ponds whereas bullfrogs, green frogs, and cricket frogs are found most often in permanent ponds with fish. Our data increase understanding of what factors structure and maintain amphibian diversity across large landscapes. PMID:25906355
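
    A hedged sketch (simulated data, not the study's) of the model comparison described above: regress richness on pond area with linear and quadratic terms and compare fits, where a clearly better quadratic fit indicates a peak at intermediate areas. Variable scales and coefficients are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
log_area = rng.uniform(1.0, 6.0, 200)  # 200 ponds; log-scaled area, units assumed
richness = 8.0 - 1.2 * (log_area - 3.5) ** 2 + rng.normal(0.0, 1.0, 200)  # humped

X_lin = sm.add_constant(log_area)
X_quad = sm.add_constant(np.column_stack([log_area, log_area**2]))
lin = sm.OLS(richness, X_lin).fit()
quad = sm.OLS(richness, X_quad).fit()
print(f"linear AIC = {lin.aic:.1f}, quadratic AIC = {quad.aic:.1f}")  # quadratic wins
```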

  5. Synthesis and synchrotron characterisation of novel dual-template of hydroxyapatite scaffolds with controlled size porous distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lima, Thiago A. R. M.; Ilavsky, Jan; Hammons, Joshua

    Hydroxyapatite (HAP) scaffolds with a hierarchical porous architecture were prepared using a new dual template (corn starch and cetyltrimethylammonium bromide (CTAB) surfactant) to cast HAP nanoparticles and develop scaffolds with a hierarchical pore size distribution. Powder X-ray diffraction (XRD) results showed that only the HAP crystalline phase is present in the samples after calcination; scanning electron microscopy (SEM) combined with small-angle (SAXS) and ultra-small-angle X-ray scattering (USAXS) techniques showed that the porous arrangement is promoted by needle-like HAP nanoparticles, and that the pore size distributions depend on the drip order of the calcium and phosphate solutions during the template preparation stage.

  6. Re-electrospraying splash-landed proteins and nanoparticles.

    PubMed

    Benner, W Henry; Lewis, Gregory S; Hering, Susanne V; Selgelke, Brent; Corzett, Michelle; Evans, James E; Lightstone, Felice C

    2012-03-06

    FITC-albumin, Lsr-F, or fluorescent polystyrene latex particles were electrosprayed from aqueous buffer and subjected to dispersion by differential electrical mobility at atmospheric pressure. A resulting narrow size cut of singly charged molecular ions or particles was passed through a condensation growth tube collector to create a flow stream of small water droplets, each carrying a single ion or particle. The droplets were splash-landed (impacted) onto a solid or liquid temperature-controlled surface. Small pools of droplets containing size-selected particles, FITC-albumin, or Lsr-F were recovered, re-electrosprayed, and, when analyzed a second time by differential electrical mobility, showed increased homogeneity. Transmission electron microscopy (TEM) analysis of the size-selected Lsr-F sample corroborated the mobility observation.

  7. Thermal conductivity measurements of particulate materials: 3. Natural samples and mixtures of particle sizes

    NASA Astrophysics Data System (ADS)

    Presley, Marsha A.; Craddock, Robert A.

    2006-09-01

    A line-heat source apparatus was used to measure thermal conductivities of natural fluvial and eolian particulate sediments under low pressures of a carbon dioxide atmosphere. These measurements were compared to a previous compilation of the dependence of thermal conductivity on particle size to determine a thermal conductivity-derived particle size for each sample. Actual particle-size distributions were determined via physical separation through brass sieves. Comparison of the two analyses indicates that the thermal conductivity reflects the larger particles within the samples. In each sample at least 85-95% of the particles by weight are smaller than or equal to the thermal conductivity-derived particle size. At atmospheric pressures less than about 2-3 torr, samples that contain a large amount of small particles (<=125 μm or 4 Φ) exhibit lower thermal conductivities relative to those for the larger particles within the sample. Nonetheless, 90% of the sample by weight still consists of particles that are smaller than or equal to this lower thermal conductivity-derived particle size. These results allow further refinement in the interpretation of geomorphologic processes acting on the Martian surface. High-energy fluvial environments should produce poorer-sorted and coarser-grained deposits than lower energy eolian environments. Hence these results will provide additional information that may help identify coarser-grained fluvial deposits and may help differentiate whether channel dunes are original fluvial sediments that are at most reworked by wind or whether they represent a later overprint of sediment with a separate origin.

  8. Measuring helium bubble diameter distributions in tungsten with grazing incidence small angle x-ray scattering (GISAXS)

    NASA Astrophysics Data System (ADS)

    Thompson, M.; Kluth, P.; Doerner, R. P.; Kirby, N.; Riley, D.; Corr, C. S.

    2016-02-01

    Grazing incidence small angle x-ray scattering was performed on tungsten samples exposed to helium plasma in the MAGPIE and Pisces-A linear plasma devices to measure the size distributions of resulting helium nano-bubbles. Nano-bubbles were fitted assuming spheroidal particles and an exponential diameter distribution. These particles had mean diameters between 0.36 and 0.62 nm. Pisces-A exposed samples showed more complex patterns, which may suggest the formation of faceted nano-bubbles or nano-scale surface structures.

  9. Effect of Silica Particle Size on Texture, Structure, and Catalytic Performance of Cu/SiO2 Catalysts for Glycerol Hydrogenolysis

    NASA Astrophysics Data System (ADS)

    Qi, Ye Tong; Zhe, Chen Hong; Ning, Xiang

    2018-03-01

    The influence of the carrier particle size of Cu/SiO2 catalysts on the hydrogenolysis of glycerol was studied using mono-dispersed silica as model supports. Catalysts were prepared by a precipitation method with average mono-dispersed silica support sizes of 10, 20, and 90 nm. Characterization of the catalysts showed that physical properties such as pore volume and BET surface area were strongly affected by the carrier particle size of the silica. However, the copper dispersion of the three samples was similar. XPS patterns showed a difference in the chemical states of the copper species: a small carrier particle size induced the formation of copper phyllosilicate, which benefits the stability of the copper species during reaction. The overall activity in glycerol hydrogenolysis correlated with the carrier particle size. Small carrier particles prevent the copper species from aggregating, so such catalysts exhibit good catalytic activity and stability.

  10. A Systematic Review of the Relationship between Familism and Mental Health Outcomes in Latino Population

    PubMed Central

    Valdivieso-Mora, Esmeralda; Peet, Casie L.; Garnier-Villarreal, Mauricio; Salazar-Villanea, Monica; Johnson, David K.

    2016-01-01

    Background: Familismo, or familism, is a cultural value frequently seen in Hispanic cultures, in which a high emphasis is placed on the family unit in terms of respect, support, obligation, and reference. Familism has been implicated as a protective factor against mental health problems and may foster the growth and development of children. This study aims at measuring the size of the relationship between familism and the mental health outcomes of depression, suicide, substance abuse, and internalizing and externalizing behaviors. Methods: Thirty-nine studies were systematically reviewed to assess the relationship between familism and mental health outcomes. Data from the studies were compiled and organized into five categories: depression, suicide, internalizing symptoms, externalizing symptoms, and substance use. The Cohen's d of each value (dependent variable in comparison to familism) was calculated. Results were weighted based on sample sizes (n), and total effect sizes were then calculated. It was hypothesized that there would be a large effect size in the relationship between familism and depression, suicide, internalizing and externalizing symptoms, and substance use in Hispanics. Results: The meta-analysis showed small effect sizes in the relationship between familism and depression, suicide, and internalizing behaviors, and no significant effects for substance abuse and externalizing behaviors. Discussion: The small effects found in this study may be explained by the presence of moderator variables between familism and mental health outcomes (e.g., communication within the family). In addition, variability in the Latino samples and in the measurements used might explain the small and non-significant effects found. PMID:27826269
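
    The weighting step described in the Methods (per-study Cohen's d values combined using sample-size weights) can be sketched in a few lines; all numbers below are hypothetical stand-ins, not values from the reviewed studies.

    ```python
    import numpy as np

    def cohens_d(m1, s1, n1, m2, s2, n2):
        """Cohen's d using the pooled standard deviation."""
        s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        return (m1 - m2) / s_pooled

    # Hypothetical summary statistics for one study: depression scores in
    # low- vs. high-familism groups (means, SDs, and ns are invented).
    d_one = cohens_d(14.2, 5.1, 60, 12.9, 4.8, 55)

    # Hypothetical per-study effect sizes and sample sizes for one outcome.
    d = np.array([d_one, 0.30, 0.05, 0.22])
    n = np.array([115, 85, 240, 60])

    # Sample-size weighting, as described in the Methods.
    d_total = np.sum(n * d) / np.sum(n)
    print(f"weighted total effect size: {d_total:.2f}")  # small by Cohen's benchmarks
    ```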

  11. Influence of field size on the physiological and skill demands of small-sided games in junior and senior rugby league players.

    PubMed

    Gabbett, Tim J; Abernethy, Bruce; Jenkins, David G

    2012-02-01

    The purpose of this study was to investigate the effect of changes in field size on the physiological and skill demands of small-sided games in elite junior and senior rugby league players. Sixteen elite senior rugby league players ([mean ± SE] age, 23.6 ± 0.5 years) and 16 elite junior rugby league players ([mean ± SE] age, 17.3 ± 0.3 years) participated in this study. On day 1, 2 teams played an 8-minute small-sided game on a small field (10-m width × 40-m length), whereas the remaining 2 teams played the small-sided game on a larger sized field (40-m width × 70-m length). On day 2, the groups were crossed over. Movement was recorded by a global positioning system unit sampling at 5 Hz. Games were filmed to count the number of possessions and the number and quality of disposals. The games played on a larger field resulted in a greater (p < 0.05) total distance covered, and distances covered in moderate, high, and very-high velocity movement intensities. Senior players covered more distance at moderate, high, and very-high intensities, and less distance at low and very-low intensities during small-sided games than junior players. Although increasing field size had no significant influence (p > 0.05) over the duration of recovery periods for junior players, larger field size significantly reduced (p < 0.05) the amount of short-, moderate-, and long-duration recovery periods in senior players. No significant between-group differences (p > 0.05) were detected for games played on a small or large field for the number or quality of skill involvements. These results suggest that increases in field size serve to increase the physiological demands of small-sided games but have minimal influence over the volume or quality of skill executions in elite rugby league players.

  12. Dry particle generation with a 3-D printed fluidized bed generator

    DOE PAGES

    Roesch, Michael; Roesch, Carolin; Cziczo, Daniel J.

    2017-06-02

    We describe the design and testing of PRIZE (PRinted fluidIZed bed gEnerator), a compact fluidized bed aerosol generator manufactured using stereolithography (SLA) printing. Dispersing small quantities of powdered materials – due to either rarity or expense – is challenging due to a lack of small, low-cost dry aerosol generators. With this as motivation, we designed and built a generator that uses a mineral dust or other dry powder sample mixed with bronze beads that sit atop a porous screen. A particle-free airflow is introduced, dispersing the sample as airborne particles. The total particle number concentrations and size distributions were measured during different stages of the assembly process to show that the SLA 3-D printed generator did not generate particles until the mineral dust sample was introduced. Furthermore, time-series measurements with Arizona Test Dust (ATD) showed stable total particle number concentrations of 10–150 cm⁻³, depending on the sample mass, from the sub- to super-micrometer size range. Additional tests with collected soil dust samples are also presented. PRIZE is simple to assemble, easy to clean, inexpensive, and deployable for laboratory and field studies that require dry particle generation.

  13. Dry particle generation with a 3-D printed fluidized bed generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roesch, Michael; Roesch, Carolin; Cziczo, Daniel J.

    We describe the design and testing of PRIZE (PRinted fluidIZed bed gEnerator), a compact fluidized bed aerosol generator manufactured using stereolithography (SLA) printing. Dispersing small quantities of powdered materials – due to either rarity or expense – is challenging due to a lack of small, low-cost dry aerosol generators. With this as motivation, we designed and built a generator that uses a mineral dust or other dry powder sample mixed with bronze beads that sit atop a porous screen. A particle-free airflow is introduced, dispersing the sample as airborne particles. The total particle number concentrations and size distributions were measured during different stages of the assembly process to show that the SLA 3-D printed generator did not generate particles until the mineral dust sample was introduced. Furthermore, time-series measurements with Arizona Test Dust (ATD) showed stable total particle number concentrations of 10–150 cm⁻³, depending on the sample mass, from the sub- to super-micrometer size range. Additional tests with collected soil dust samples are also presented. PRIZE is simple to assemble, easy to clean, inexpensive, and deployable for laboratory and field studies that require dry particle generation.

  14. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with that of the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
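
    For readers unfamiliar with the threshold rule the abstract criticizes, a minimal sketch follows. The cost and effect figures are hypothetical, and the decision rule shown is the conventional ICER-versus-lambda comparison (valid as written only when the incremental effect is positive), not the authors' opportunity cost method.

    ```python
    def icer(cost_new, effect_new, cost_ref, effect_ref):
        """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
        return (cost_new - cost_ref) / (effect_new - effect_ref)

    # Hypothetical trial arms (illustrative numbers only).
    cost_new, effect_new = 12_000.0, 1.8   # e.g., cost in dollars, effect in QALYs
    cost_ref, effect_ref = 9_000.0, 1.5

    lam = 50_000.0  # threshold value lambda (dollars per unit of effect)

    ratio = icer(cost_new, effect_new, cost_ref, effect_ref)
    print(f"ICER = {ratio:,.0f} per unit of effect")
    # Conventional rule: adopt the new intervention if ICER < lambda
    # (assuming the incremental effect is positive).
    print("efficient under threshold rule" if ratio < lam else "not efficient")
    ```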

  15. Blinded and unblinded internal pilot study designs for clinical trials with count data.

    PubMed

    Schneider, Simon; Schmidli, Heinz; Friede, Tim

    2013-07-01

    Internal pilot studies are a popular design feature to address uncertainties in the sample size calculations caused by vague information on nuisance parameters. Despite their popularity, only very recently have blinded sample size reestimation procedures for trials with count data been proposed and their properties systematically investigated. Although blinded procedures are favored by regulatory authorities, practical application is somewhat limited by fears that blinded procedures are prone to bias if the treatment effect was misspecified in the planning. Here, we compare unblinded and blinded procedures with respect to bias, error rates, and sample size distribution. We find that both procedures maintain the desired power and that the unblinded procedure is slightly liberal, whereas the actual significance level of the blinded procedure is close to the nominal level. Furthermore, we show that in situations where uncertainty about the assumed treatment effect exists, the blinded estimator of the control event rate is biased, in contrast to the unblinded estimator, which results in differences in mean sample sizes in favor of the unblinded procedure. However, these differences are rather small compared to the deviations of the mean sample sizes from the sample size required to detect the true, but unknown, effect. We demonstrate that the variation of the sample size resulting from the blinded procedure is, in many practically relevant situations, considerably smaller than that of the unblinded procedure. The methods are extended to overdispersed counts using a quasi-likelihood approach and are illustrated by trials in relapsing multiple sclerosis. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
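
    A minimal sketch of the blinded idea for count data follows, under simplifying assumptions (two arms, equal allocation, Poisson counts, one unit of exposure per patient, normal approximation for the rate difference); it illustrates the general approach rather than reproducing the authors' procedures. Note how the arm rates must be reconstructed from the pooled rate using the *assumed* effect, which is exactly where the bias discussed above enters if that assumption is wrong.

    ```python
    import numpy as np
    from scipy.stats import norm

    def n_per_arm(lam0, lam1, alpha=0.05, power=0.9):
        """Normal-approximation sample size per arm to detect a difference in
        Poisson rates lam0 vs lam1 (one unit of exposure per subject)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return int(np.ceil(z**2 * (lam0 + lam1) / (lam0 - lam1) ** 2))

    # Blinded interim look: only the pooled event rate is observed.
    pooled_rate = 0.85        # total events / total exposure, both arms combined
    assumed_ratio = 0.7       # planning assumption: lam1 = 0.7 * lam0

    # Reconstruct arm rates from the blinded pooled rate and the assumed effect
    # (equal allocation): pooled = (lam0 + lam1) / 2 = lam0 * (1 + ratio) / 2.
    lam0 = 2 * pooled_rate / (1 + assumed_ratio)
    lam1 = assumed_ratio * lam0

    print(f"reestimated n per arm: {n_per_arm(lam0, lam1)}")
    ```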

  16. Body Size Correlates with Fertilization Success but not Gonad Size in Grass Goby Territorial Males

    PubMed Central

    Pujolar, Jose Martin; Locatello, Lisa; Zane, Lorenzo; Mazzoldi, Carlotta

    2012-01-01

    In fish species with alternative male mating tactics, sperm competition typically occurs when small males that are unsuccessful in direct contests steal fertilization opportunities from large dominant males. In the grass goby Zosterisessor ophiocephalus, large territorial males defend and court females from nest sites, while small sneaker males obtain matings by sneaking into nests. Parentage assignment of 688 eggs from 8 different nests sampled in the 2003–2004 breeding season revealed a high level of sperm competition. Fertilization success of territorial males was very high but in all nests sneakers also contributed to the progeny. In territorial males, fertilization success correlated positively with male body size. Gonadal investment was explored in a sample of 126 grass gobies collected during the period 1995–1996 in the same area (61 territorial males and 65 sneakers). Correlation between body weight and testis weight was positive and significant for sneaker males, while correlation was virtually equal to zero in territorial males. That body size in territorial males is correlated with fertilization success but not gonad size suggests that males allocate much more energy into growth and relatively little into sperm production once the needed size to become territorial is attained. The increased paternity of larger territorial males might be due to a more effective defense of the nest in comparison with smaller territorial males. PMID:23056415

  17. Body size correlates with fertilization success but not gonad size in grass goby territorial males.

    PubMed

    Pujolar, Jose Martin; Locatello, Lisa; Zane, Lorenzo; Mazzoldi, Carlotta

    2012-01-01

    In fish species with alternative male mating tactics, sperm competition typically occurs when small males that are unsuccessful in direct contests steal fertilization opportunities from large dominant males. In the grass goby Zosterisessor ophiocephalus, large territorial males defend and court females from nest sites, while small sneaker males obtain matings by sneaking into nests. Parentage assignment of 688 eggs from 8 different nests sampled in the 2003-2004 breeding season revealed a high level of sperm competition. Fertilization success of territorial males was very high but in all nests sneakers also contributed to the progeny. In territorial males, fertilization success correlated positively with male body size. Gonadal investment was explored in a sample of 126 grass gobies collected during the period 1995-1996 in the same area (61 territorial males and 65 sneakers). Correlation between body weight and testis weight was positive and significant for sneaker males, while correlation was virtually equal to zero in territorial males. That body size in territorial males is correlated with fertilization success but not gonad size suggests that males allocate much more energy into growth and relatively little into sperm production once the needed size to become territorial is attained. The increased paternity of larger territorial males might be due to a more effective defense of the nest in comparison with smaller territorial males.

  18. Characterization of Extracellular Vesicles by Size-Exclusion High-Performance Liquid Chromatography (HPLC).

    PubMed

    Huang, Tao; He, Jiang

    2017-01-01

    Extracellular vesicles (EVs) have recently attracted substantial attention due to their potential diagnostic and therapeutic relevance. Although a variety of techniques have been used to isolate and analyze EVs, existing approaches remain far from satisfactory. Size-exclusion chromatography (SEC), which separates analytes by size, has been widely applied in protein purification and analysis. The purpose of this chapter is to show how size-exclusion high-performance liquid chromatography (HPLC) can be used to characterize small-sized impurities or contaminants in EV preparations, and thus to assess the purity of EV samples.

  19. Model of Tooth Morphogenesis Predicts Carabelli Cusp Expression, Size, and Symmetry in Humans

    PubMed Central

    Hunter, John P.; Guatelli-Steinberg, Debbie; Weston, Theresia C.; Durner, Ryan; Betsinger, Tracy K.

    2010-01-01

    Background The patterning cascade model of tooth morphogenesis accounts for shape development through the interaction of a small number of genes. In the model, gene expression both directs development and is controlled by the shape of developing teeth. Enamel knots (zones of nonproliferating epithelium) mark the future sites of cusps. In order to form, a new enamel knot must escape the inhibitory fields surrounding other enamel knots before crown components become spatially fixed as morphogenesis ceases. Because cusp location on a fully formed tooth reflects enamel knot placement and tooth size is limited by the cessation of morphogenesis, the model predicts that cusp expression varies with intercusp spacing relative to tooth size. Although previous studies in humans have supported the model's implications, here we directly test the model's predictions for the expression, size, and symmetry of Carabelli cusp, a variation present in many human populations. Methodology/Principal Findings In a dental cast sample of upper first molars (M1s) (187 rights, 189 lefts, and 185 antimeric pairs), we measured tooth area and intercusp distances with a Hirox digital microscope. We assessed Carabelli expression quantitatively as an area in a subsample and qualitatively using two typological schemes in the full sample. As predicted, low relative intercusp distance is associated with Carabelli expression in both right and left samples using either qualitative or quantitative measures. Furthermore, asymmetry in Carabelli area is associated with asymmetry in relative intercusp spacing. Conclusions/Significance These findings support the model's predictions for Carabelli cusp expression both across and within individuals. By comparing right-left pairs of the same individual, our data show that small variations in developmental timing or spacing of enamel knots can influence cusp pattern independently of genotype. Our findings suggest that during evolution new cusps may first appear as a result of small changes in the spacing of enamel knots relative to crown size. PMID:20689576

  20. Spatial scale and sampling resolution affect measures of gap disturbance in a lowland tropical forest: implications for understanding forest regeneration and carbon storage.

    PubMed

    Lobo, Elena; Dalling, James W

    2014-03-07

    Treefall gaps play an important role in tropical forest dynamics and in determining above-ground biomass (AGB). However, our understanding of gap disturbance regimes is largely based either on surveys of forest plots that are small relative to spatial variation in gap disturbance, or on satellite imagery, which cannot accurately detect small gaps. We used high-resolution light detection and ranging data from a 1500 ha forest in Panama to: (i) determine how gap disturbance parameters are influenced by study area size, and the criteria used to define gaps; and (ii) to evaluate how accurately previous ground-based canopy height sampling can determine the size and location of gaps. We found that plot-scale disturbance parameters frequently differed significantly from those measured at the landscape-level, and that canopy height thresholds used to define gaps strongly influenced the gap-size distribution, an important metric influencing AGB. Furthermore, simulated ground surveys of canopy height frequently misrepresented the true location of gaps, which may affect conclusions about how relatively small canopy gaps affect successional processes and contribute to the maintenance of diversity. Across site comparisons need to consider how gap definition, scale and spatial resolution affect characterizations of gap disturbance, and its inferred importance for carbon storage and community composition.

  1. Interrelationships among Grain Size, Surface Composition, Air Stability, and Interfacial Resistance of Al-Substituted Li7La3Zr2O12 Solid Electrolytes.

    PubMed

    Cheng, Lei; Wu, Cheng Hao; Jarry, Angelique; Chen, Wei; Ye, Yifan; Zhu, Junfa; Kostecki, Robert; Persson, Kristin; Guo, Jinghua; Salmeron, Miquel; Chen, Guoying; Doeff, Marca

    2015-08-19

    The interfacial resistances of symmetrical lithium cells containing Al-substituted Li7La3Zr2O12 (LLZO) solid electrolytes are sensitive to their microstructures and histories of exposure to air. Air exposure of LLZO samples with large grain sizes (∼150 μm) results in dramatically increased interfacial impedances in cells containing them, compared to those with pristine large-grained samples. In contrast, a much smaller difference is seen between cells with small-grained (∼20 μm) pristine and air-exposed LLZO samples. A combination of soft X-ray absorption spectroscopy (sXAS) and Raman spectroscopy, with probing depths ranging from nanometer to micrometer scales, revealed that the small-grained LLZO pellets are more air-stable than large-grained ones, forming far less surface Li2CO3 under both short- and long-term exposure conditions. Surface-sensitive X-ray photoelectron spectroscopy (XPS) indicates that the better chemical stability of the small-grained LLZO is related to differences in the distribution of Al and Li at sample surfaces. Density functional theory calculations show that LLZO can react via two different pathways to form Li2CO3. The first, more rapid, pathway involves a reaction with moisture in air to form LiOH, which subsequently absorbs CO2 to form Li2CO3. The second, slower, pathway involves direct reaction with CO2 and is favored when surface lithium contents are lower, as with the small-grained samples. These observations have important implications for the operation of solid-state lithium batteries containing LLZO because the results suggest that the interfacial impedances of these devices are critically dependent upon specific characteristics of the solid electrolyte and how it is prepared.

  2. A Novel Videography Method for Generating Crack-Extension Resistance Curves in Small Bone Samples

    PubMed Central

    Katsamenis, Orestis L.; Jenkins, Thomas; Quinci, Federico; Michopoulou, Sofia; Sinclair, Ian; Thurner, Philipp J.

    2013-01-01

    Assessment of bone quality is an emerging solution for quantifying the effects of bone pathology or treatment. Perhaps one of the most important parameters characterising bone quality is the toughness behaviour of bone. Particularly, fracture toughness, is becoming a popular means for evaluating bone quality. The method is moving from a single value approach that models bone as a linear-elastic material (using the stress intensity factor, K) towards full crack extension resistance curves (R-curves) using a non-linear model (the strain energy release rate in J-R curves). However, for explanted human bone or small animal bones, there are difficulties in measuring crack-extension resistance curves due to size constraints at the millimetre and sub-millimetre scale. This research proposes a novel “whitening front tracking” method that uses videography to generate full fracture resistance curves in small bone samples where crack propagation cannot typically be observed. Here we present this method on sharp edge notched samples (<1 mm×1 mm×Length) prepared from four human femora tested in three-point bending. Each sample was loaded in a mechanical tester with the crack propagation recorded using videography and analysed using an algorithm to track the whitening (damage) zone. Using the “whitening front tracking” method, full R-curves and J-R curves could be generated for these samples. The curves for this antiplane longitudinal orientation were similar to those found in the literature, being between the published longitudinal and transverse orientations. The proposed technique shows the ability to generate full “crack” extension resistance curves by tracking the whitening front propagation to overcome the small size limitations and the single value approach. PMID:23405186

  3. [Study on anemia and vitamin A and vitamin D nutritional status of Chinese urban pregnant women in 2010-2012].

    PubMed

    Hu, Y C; Chen, J; Li, M; Wang, R; Li, W D; Yang, Y H; Yang, C; Yun, C F; Yang, L C; Yang, X G

    2017-02-06

    Objective: To evaluate the prevalence of anemia and the nutritional status of vitamins A and D by analyzing hemoglobin, serum retinol, and serum 25-hydroxyvitamin D levels in Chinese urban pregnant women during 2010-2012. Methods: Data were obtained from the China Nutrition and Health Survey in 2010-2012. Using multi-stage stratified sampling and population proportional stratified random sampling, 2 250 pregnant women from 34 metropolises and 41 middle-sized and small cities were included in this study. Information was collected using a questionnaire survey. The blood hemoglobin concentration was determined using the cyanmethemoglobin method, and anemia was determined using the World Health Organization guidelines combined with the elevation correction standard. The serum retinol level was determined using high-performance liquid chromatography, and vitamin A deficiency (VAD) was judged by the related standard recommended by the World Health Organization. The vitamin D level was determined using enzyme-linked immunosorbent assay, and vitamin D deficiency was judged by the recommendation standards from the Institute of Medicine of The National Academies. The hemoglobin, serum retinol, and serum 25-hydroxyvitamin D levels were compared, along with differences in the prevalence of anemia, VAD, and the vitamin D deficiency rate (including deficiency and serious deficiency). Results: A total of 1 738 cases of hemoglobin level, 594 cases of serum retinol level, and 1 027 cases of serum 25-hydroxyvitamin D were available for analysis in this study. The overall blood hemoglobin level (P50 (P25-P75)) was 122.70 (114.00-131.10) g/L; 123.70 (115.21-132.00) g/L for metropolises and 122.01 (113.30-130.40) g/L for middle-sized and small cities. The blood hemoglobin level of metropolis residents was significantly higher than that of middle-sized and small city residents (P = 0.027). The overall prevalence of anemia was 17.0% (295/1 738). The overall serum retinol level (P50 (P25-P75)) was 1.61 (1.20-2.06) μmol/L; 1.50 (1.04-2.06) μmol/L for metropolises and 1.63 (1.31-2.05) μmol/L for middle-sized and small cities. The serum retinol level of metropolis residents was significantly higher than that of middle-sized and small city residents (P = 0.033). The overall prevalence of VAD was 7.4% (47/639); 11.5% (33/286) for metropolises and 4.0% (14/353) for middle-sized and small cities. A significant difference was observed in the prevalence of VAD between metropolis and middle-sized and small city residents (P < 0.001). The overall serum 25-hydroxyvitamin D level (P50 (P25-P75)) was 15.41 (11.79-20.23) ng/ml; 14.71 (11.15-19.07) ng/ml for metropolises and 16.02 (12.65-21.36) ng/ml for middle-sized and small cities. A significant difference was observed in the vitamin D level between metropolis and middle-sized and small city residents (P < 0.001). The overall prevalence of vitamin D deficiency was 74.3% (763/1 027). A significant difference was observed in the prevalence of serious vitamin D deficiency between metropolis residents (30.64% (144/470)) and middle-sized and small city residents (26% (267/1 027)) (P = 0.002). There were no significant differences between blood hemoglobin level and the prevalence of anemia, VAD, and vitamin D deficiency. Conclusion: The prevalence of anemia in Chinese urban pregnant women improved from 2002 to 2012. The prevalence of vitamin D deficiency in pregnant women was generally more serious, while a certain percentage of women had VAD. The prevalence of VAD and serious vitamin D deficiency among pregnant women from metropolises was significantly higher than that among pregnant women from medium- and small-sized cities.

  4. Noninvasive genetics provides insights into the population size and genetic diversity of an Amur tiger population in China.

    PubMed

    Wang, Dan; Hu, Yibo; Ma, Tianxiao; Nie, Yonggang; Xie, Yan; Wei, Fuwen

    2016-01-01

    Understanding population size and genetic diversity is critical for effective conservation of endangered species. The Amur tiger (Panthera tigris altaica) is the largest felid and a flagship species for wildlife conservation. Due to habitat loss and human activities, available habitat and population size are continuously shrinking. However, little is known about the true population size and genetic diversity of wild tiger populations in China. In this study, we collected 55 fecal samples and 1 hair sample to investigate the population size and genetic diversity of wild Amur tigers in Hunchun National Nature Reserve, Jilin Province, China. From the samples, we determined that 23 fecal samples and 1 hair sample were from 7 Amur tigers: 2 males, 4 females and 1 individual of unknown sex. Interestingly, 2 fecal samples that were presumed to be from tigers were from Amur leopards, highlighting the significant advantages of noninvasive genetics over traditional methods in studying rare and elusive animals. Analyses from this sample suggested that the genetic diversity of wild Amur tigers is much lower than that of Bengal tigers, consistent with previous findings. Furthermore, the genetic diversity of this Hunchun population in China was lower than that of the adjoining subpopulation in southwest Primorye Russia, likely due to sampling bias. Considering the small population size and relatively low genetic diversity, it is urgent to protect this endangered local subpopulation in China. © 2015 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  5. Advanced Experimental Methods for Low-temperature Magnetotransport Measurement of Novel Materials

    PubMed Central

    Hagmann, Joseph A.; Le, Son T.; Richter, Curt A.; Seiler, David G.

    2016-01-01

    Novel electronic materials are often produced for the first time by synthesis processes that yield bulk crystals (in contrast to single-crystal thin film synthesis) for the purpose of exploratory materials research. Certain materials pose a challenge wherein the traditional bulk Hall bar device fabrication method is insufficient to produce a measurable device for sample transport measurement, principally because the single crystal size is too small to attach wire leads to the sample in a Hall bar configuration. This can be, for example, because the first batch of a new material synthesized yields very small single crystals or because flakes of samples of one to very few monolayers are desired. In order to enable rapid characterization of materials that may be carried out in parallel with improvements to their growth methodology, a method of device fabrication for very small samples has been devised to permit the characterization of novel materials as soon as a preliminary batch has been produced. A slight variation of this methodology is applicable to producing devices using exfoliated samples of two-dimensional materials such as graphene, hexagonal boron nitride (hBN), and transition metal dichalcogenides (TMDs), as well as multilayer heterostructures of such materials. Here we present detailed protocols for the experimental device fabrication of fragments and flakes of novel materials with micron-sized dimensions onto a substrate and subsequent measurement in a commercial superconducting magnet, dry helium closed-cycle cryostat magnetotransport system at temperatures down to 0.300 K and magnetic fields up to 12 T. PMID:26863449

  6. Subattomole sensitivity in biological accelerator mass spectrometry.

    PubMed

    Salehpour, Mehran; Possnert, Göran; Bryhni, Helge

    2008-05-15

    The Uppsala University 5 MV Pelletron tandem accelerator has been used to study (14)C-labeled biological samples utilizing accelerator mass spectrometry (AMS) technology. We have adapted a sample preparation method for small biological samples down to a few tens of micrograms of carbon, involving, among other things, miniaturization of the graphitization reactor. Standard AMS requires about 1 mg of carbon, with a limit of quantitation of about 10 amol. Results are presented for a range of small sample sizes with concentrations down to below 1 pM of a pharmaceutical substance in human blood. It is shown that (14)C-labeled molecular markers can be routinely measured from the femtomole range down to a few hundred zeptomoles (10^-21 mol), without the use of any additional separation methods.

  7. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  8. How Significant Is a Boxplot Outlier?

    ERIC Educational Resources Information Center

    Dawson, Robert

    2011-01-01

    It is common to consider Tukey's schematic ("full") boxplot as an informal test for the existence of outliers. While the procedure is useful, it should be used with caution, as at least 30% of samples from a normally-distributed population of any size will be flagged as containing an outlier, while for small samples (N less than 10) even extreme…
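
    The flagging rate described above is easy to check empirically. Here is a small simulation sketch of Tukey's 1.5-IQR rule applied to pure normal samples of several sizes; the rates it prints are Monte Carlo estimates, not figures from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def has_boxplot_outlier(x):
        """True if any point falls outside Tukey's 1.5*IQR fences."""
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        return np.any((x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr))

    for n in (5, 10, 30, 100, 1000):
        flagged = np.mean([has_boxplot_outlier(rng.normal(size=n))
                           for _ in range(5000)])
        print(f"n={n:5d}: fraction of pure-normal samples flagged = {flagged:.2f}")
    ```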

  9. Female reproductive characteristics of three species in the Orconectes subgenus Trisellescens and comparisons to other Orconectes species

    Treesearch

    S. B. Adams

    2008-01-01

    In streams of Mississippi and southwest Tennessee, Orconectes females with eggs or hatchlings are not commonly encountered while sampling. I report on fecundity, egg size, and aspects of reproductive timing for small samples of female Orconectes chickasawae, Orconectes etnieri, and Orconectes jonesi carrying eggs or hatchlings and...

  10. Developing and refining NIR calibrations for total carbohydrate composition and isoflavones and saponins in ground whole soy meal

    USDA-ARS?s Scientific Manuscript database

    Although many near infrared (NIR) spectrometric calibrations exist for a variety of components in soy, current calibration methods are often limited by either a small sample size on which the calibrations are based or a wide variation in sample preparation and measurement methods, which yields unrel...

  11. Intellectual Abilities in a Large Sample of Children with Velo-Cardio-Facial Syndrome: An Update

    ERIC Educational Resources Information Center

    De Smedt, Bert; Devriendt, K.; Fryns, J. -P.; Vogels, A.; Gewillig, M.; Swillen, A.

    2007-01-01

    Background: Learning disabilities are one of the most consistently reported features in Velo-Cardio-Facial Syndrome (VCFS). Earlier reports on IQ in children with VCFS were, however, limited by small sample sizes and ascertainment biases. The aim of the present study was therefore to replicate these earlier findings and to investigate intellectual…

  12. The Emotions of Socialization-Related Learning: Understanding Workplace Adaptation as a Learning Process.

    ERIC Educational Resources Information Center

    Reio, Thomas G., Jr.

    The influence of selected discrete emotions on socialization-related learning and perception of workplace adaptation was examined in an exploratory study. Data were collected from 233 service workers in 4 small and medium-sized companies in metropolitan Washington, D.C. The sample members' average age was 32.5 years, and the sample's racial makeup…

  13. Model Choice and Sample Size in Item Response Theory Analysis of Aphasia Tests

    ERIC Educational Resources Information Center

    Hula, William D.; Fergadiotis, Gerasimos; Martin, Nadine

    2012-01-01

    Purpose: The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Method: Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from…

  14. Characterization of Raman Scattering in Solid Samples with Different Particle Sizes and Elucidation on the Trends of Particle Size-Dependent Intensity Variations in Relation to Changes in the Sizes of Laser Illumination and Detection Area.

    PubMed

    Duy, Pham K; Chun, Seulah; Chung, Hoeil

    2017-11-21

    We have systematically characterized Raman scattering in solid samples with different particle sizes and investigated the resulting trends of particle size-induced intensity variations. For this purpose, both lactose powders and pellets composed of five different particle sizes were prepared. Uniquely in this study, three spectral acquisition schemes with different sizes of laser illumination and detection windows were employed for the evaluation, since it was expected that the experimental configuration would be another factor potentially influencing the intensity of the lactose peak, along with the particle size itself. In both samples, the distribution of Raman photons became broader with the increase in particle size, as the mean free path of laser photons, the average photon travel distance between consecutive scattering locations, became longer under this situation. When the particle size was the same, the Raman photon distribution was narrower in the pellets, since the individual particles were more densely packed in a given volume (the shorter mean free path). When the size of the detection window was small, the number of photons reaching the detector decreased as the photon distribution broadened. Meanwhile, a large-window detector was able to collect the widely distributed Raman photons more effectively; therefore, the trends of intensity change with the variation in particle size were dissimilar depending on the employed spectral acquisition schemes. Overall, Monte Carlo simulation was effective at probing the photon distribution inside the samples and helped to support the experimental observations.
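
    As a rough illustration of the mean-free-path reasoning above, here is a toy Monte Carlo sketch of photons random-walking in a sample: a longer mean free path spreads the exit points of re-emerging photons over a wider area, mimicking the broader photon distribution for larger particles. Every quantity here (exponential step lengths, isotropic scattering, the absorption probability, the units) is a simplifying assumption, not a parameter of the study's simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def exit_radii(mean_free_path, n_photons=5000, p_absorb=0.05):
        """Toy 3-D random walk: photons enter at the origin heading into the
        sample (+z); each step length is Exponential(mean_free_path) with an
        isotropic new direction. Photons that cross z < 0 exit, and we record
        their lateral distance from the entry point."""
        radii = []
        for _ in range(n_photons):
            pos = np.zeros(3)
            direction = np.array([0.0, 0.0, 1.0])
            while True:
                pos = pos + direction * rng.exponential(mean_free_path)
                if pos[2] < 0:                  # re-emerged from the surface
                    radii.append(np.hypot(pos[0], pos[1]))
                    break
                if rng.random() < p_absorb:     # photon lost inside the sample
                    break
                v = rng.normal(size=3)          # isotropic scattering direction
                direction = v / np.linalg.norm(v)
        return np.array(radii)

    for mfp in (0.02, 0.1):                     # shorter vs. longer mean free path
        r = exit_radii(mfp)
        print(f"mean free path {mfp}: median exit radius = {np.median(r):.3f}")
    ```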

  15. STUDY OF HOME DEMONSTRATION UNITS IN A SAMPLE OF 27 COUNTIES IN NEW YORK STATE, NUMBER 3.

    ERIC Educational Resources Information Center

    ALEXANDER, FRANK D.; HARSHAW, JEAN

    AN EXPLORATORY STUDY EXAMINED CHARACTERISTICS OF 1,128 HOME DEMONSTRATION UNITS TO SUGGEST HYPOTHESES AND SCOPE FOR A MORE INTENSIVE STUDY OF A SMALL SAMPLE OF UNITS, AND TO PROVIDE GUIDANCE IN SAMPLING. DATA WERE OBTAINED FROM A SPECIALLY DESIGNED MEMBERSHIP CARD USED IN 1962. UNIT SIZE AVERAGED 23.6 MEMBERS BUT THE RANGE WAS FAIRLY GREAT. A NEED…

  16. Assessing Stress Responses in Beaked and Sperm Whales in the Bahamas

    DTIC Science & Technology

    2012-09-30

    acceptable extraction efficiency for steroids (Hayward et al. 2010; Wasser et al. 2010). The "small sample size" effect on hormone concentration was...efficiency (Wasser pers. comm., Hunt et al. unpub. data). 4) Pilot test of hormone content in seawater removed from samples. The large volume of...2006), and Wasser et al. (2010), with extraction modifications discussed above. RESULTS Sample processing Using a consistent fecal:solvent

  17. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    PubMed

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently, as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose up to 20% power, depending on the value of the dispersion parameter.
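
    The dispersion effect the authors describe can be illustrated with a deliberately simplified two-arm calculation (the paper's three-arm equivalence design is more involved). This sketch uses the negative binomial variance lam + lam^2/k and a plain normal approximation, both assumptions made only for illustration; setting the dispersion to infinity recovers the Poisson case.

    ```python
    import numpy as np
    from scipy.stats import norm

    def n_per_arm(lam0, lam1, dispersion=np.inf, alpha=0.05, power=0.8):
        """Normal-approximation sample size per arm for comparing two event
        rates. Var(count) = lam + lam**2 / dispersion, so dispersion = inf
        gives the Poisson case and small dispersion inflates the variance."""
        var = (lam0 + lam0**2 / dispersion) + (lam1 + lam1**2 / dispersion)
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return int(np.ceil(z**2 * var / (lam0 - lam1) ** 2))

    lam0, lam1 = 1.0, 0.7
    for k in (np.inf, 10, 2, 0.5):   # negative binomial dispersion parameter
        print(f"dispersion {k}: n per arm = {n_per_arm(lam0, lam1, k)}")
    ```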

  18. Measurement of surface water runoff from plots of two different sizes

    NASA Astrophysics Data System (ADS)

    Joel, Abraham; Messing, Ingmar; Seguel, Oscar; Casanova, Manuel

    2002-05-01

    Intensities and amounts of water infiltration and runoff on sloping land are governed by the rainfall pattern and soil hydraulic conductivity, as well as by the microtopography and soil surface conditions. These components are closely interrelated and occur simultaneously, and their particular contribution may change during a rainfall event, or their effects may vary at different field scales. The scale effect on the process of infiltration/runoff was studied under natural field and rainfall conditions for two plot sizes: small plots of 0·25 m2 and large plots of 50 m2. The measurements were carried out in the central region of Chile in a piedmont most recently used as natural pastureland. Three blocks, each having one large plot and five small plots, were established. Cumulative rainfall and runoff quantities were sampled every 5 min. Significant variations in runoff responses to rainfall rates were found for the two plot sizes. On average, large plots yielded only 40% of runoff quantities produced on small plots per unit area. This difference between plot sizes was observed even during periods of continuous runoff.

  19. Development and Validation of Pathogen Environmental Monitoring Programs for Small Cheese Processing Facilities.

    PubMed

    Beno, Sarah M; Stasiewicz, Matthew J; Andrus, Alexis D; Ralyea, Robert D; Kent, David J; Martin, Nicole H; Wiedmann, Martin; Boor, Kathryn J

    2016-12-01

    Pathogen environmental monitoring programs (EMPs) are essential for food processing facilities of all sizes that produce ready-to-eat food products exposed to the processing environment. We developed, implemented, and evaluated EMPs targeting Listeria spp. and Salmonella in nine small cheese processing facilities, including seven farmstead facilities. Individual EMPs with monthly sample collection protocols were designed specifically for each facility. Salmonella was detected in only one facility, with likely introduction from the adjacent farm indicated by pulsed-field gel electrophoresis data. Listeria spp. were isolated from all nine facilities during routine sampling. The overall Listeria spp. (other than Listeria monocytogenes ) and L. monocytogenes prevalences in the 4,430 environmental samples collected were 6.03 and 1.35%, respectively. Molecular characterization and subtyping data suggested persistence of a given Listeria spp. strain in seven facilities and persistence of L. monocytogenes in four facilities. To assess routine sampling plans, validation sampling for Listeria spp. was performed in seven facilities after at least 6 months of routine sampling. This validation sampling was performed by independent individuals and included collection of 50 to 150 samples per facility, based on statistical sample size calculations. Two of the facilities had a significantly higher frequency of detection of Listeria spp. during the validation sampling than during routine sampling, whereas two other facilities had significantly lower frequencies of detection. This study provides a model for a science- and statistics-based approach to developing and validating pathogen EMPs.
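
    One common statistics-based calculation for sizing a monitoring plan like this asks how many independent samples are needed to detect at least one positive with a given confidence when the pathogen is present at a given prevalence. Whether this is the exact calculation the authors used is not stated in the abstract, so the sketch below is illustrative only.

    ```python
    import math

    def n_to_detect(prevalence, confidence=0.95):
        """Number of independent environmental samples needed to find at
        least one positive with the given confidence, if the pathogen is
        present at a fraction `prevalence` of sampling sites."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

    for p in (0.02, 0.05, 0.10):
        print(f"prevalence {p:.0%}: n = {n_to_detect(p)}")
    ```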

  20. Cognitive and Occupational Function in Survivors of Adolescent Cancer.

    PubMed

    Nugent, Bethany D; Bender, Catherine M; Sereika, Susan M; Tersak, Jean M; Rosenzweig, Margaret

    2018-02-01

    Adolescents with cancer have unique developmental considerations. These include brain development, particularly in the frontal lobe, and a focus on completing education and entering the workforce. Cancer and treatment at this stage may prove to uniquely affect survivors' experience of cognitive and occupational function. An exploratory, cross-sectional, descriptive comparative study was employed to describe cognitive and occupational function in adult survivors of adolescent cancer (diagnosed between the ages of 15 and 21 years) and explore differences from age- and gender-matched controls. In total, 23 survivors and 14 controls participated in the study. While significant differences were not found between the groups on measures of cognitive and occupational function, several small and medium effect sizes were found, suggesting that survivors may have greater difficulty than controls. Two small effect sizes were found in measures of neuropsychological performance (the Digit Vigilance test [d = 0.396] and Stroop test [d = 0.226]). Small and medium effect sizes ranging from 0.269 to 0.605 were found for aspects of perceived and total cognitive function. A small effect size was also found in work output (d = 0.367). While we did not find significant differences in cognitive or occupational function between survivors and controls, the effect sizes observed point to the need for future research. Future work using a larger sample size and a longitudinal design is needed to further explore cognitive and occupational function in this vulnerable and understudied population and to assist in the understanding of patterns of change over time.

  1. Portrait of a small population of boreal toads (Anaxyrus boreas)

    USGS Publications Warehouse

    Muths, Erin; Scherer, Rick D.

    2011-01-01

    Much attention has been given to the conservation of small populations, those that are small because of decline, and those that are naturally small. Small populations are of particular interest because ecological theory suggests that they are vulnerable to the deleterious effects of environmental, demographic, and genetic stochasticity as well as natural and human-induced catastrophes. However, testing theory and developing applicable conservation measures for small populations is hampered by sparse data. This lack of information is frequently driven by computational issues with small data sets that can be confounded by the impacts of stressors. We present estimates of demographic parameters from a small population of Boreal Toads (Anaxyrus boreas) that has been surveyed since 2001 by using capture-recapture methods. Estimates of annual adult survival probability are high relative to other Boreal Toad populations, whereas estimates of recruitment rate are low. Despite using simple models, clear patterns emerged from the analyses, suggesting that population size is constrained by low recruitment of adults and is declining slowly. These patterns provide insights that are useful in developing management directions for this small population, and this study serves as an example of the potential for small populations to yield robust and useful information despite sample size constraints.
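
    The study fits multi-year capture-recapture models; as background, the simplest two-occasion population size estimator from capture-recapture data is sketched below with hypothetical counts. This is illustrative background for readers, not the authors' analysis, which uses far richer multi-season models.

    ```python
    def chapman_estimate(marked, caught, recaptured):
        """Chapman's bias-corrected Lincoln-Petersen estimate of population
        size from a two-occasion capture-recapture survey, plus a rough
        variance for an approximate confidence interval."""
        n_hat = (marked + 1) * (caught + 1) / (recaptured + 1) - 1
        var = ((marked + 1) * (caught + 1) * (marked - recaptured)
               * (caught - recaptured)) / ((recaptured + 1) ** 2 * (recaptured + 2))
        return n_hat, var

    # Hypothetical counts (illustrative only): 40 toads marked on occasion 1,
    # 35 caught on occasion 2, of which 12 carried marks.
    n_hat, var = chapman_estimate(40, 35, 12)
    print(f"estimated population size: {n_hat:.0f} +/- {1.96 * var ** 0.5:.0f}")
    ```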

  2. Minimization of reflection cracks in flexible pavements.

    DOT National Transportation Integrated Search

    1977-01-01

    This report describes the performance of fabrics used under overlays in an effort to minimize longitudinal and alligator cracking in flexible pavements. It is concluded, although the sample size is small, that the treatments will extend the pavement ...

  3. Context Matters: Volunteer Bias, Small Sample Size, and the Value of Comparison Groups in the Assessment of Research-Based Undergraduate Introductory Biology Lab Courses

    PubMed Central

    Brownell, Sara E.; Kloser, Matthew J.; Fukami, Tadashi; Shavelson, Richard J.

    2013-01-01

    The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the need for evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course. PMID:24358380

  4. Validation of a New Metric for Assessing the Integration of Health Protection and Health Promotion in a Sample of Small- and Medium-Sized Employer Groups.

    PubMed

    Williams, Jessica A R; Nelson, Candace C; Cabán-Martinez, Alberto J; Katz, Jeffrey N; Wagner, Gregory R; Pronk, Nicolaas P; Sorensen, Glorian; McLellan, Deborah L

    2015-09-01

    To conduct validation analyses for a new measure of the integration of worksite health protection and health promotion approaches developed in earlier research. A survey of small- to medium-sized employers located in the United States was conducted between October 2013 and March 2014 (n = 111). Cronbach α coefficient was used to assess reliability, and Pearson correlation coefficients were used to assess convergent validity. The integration score was positively associated with the measures of occupational safety and health and health promotion activities/policies-supporting its convergent validity (Pearson correlation coefficients of 0.32 to 0.47). Cronbach α coefficient was 0.94, indicating excellent reliability. The integration score seems to be a promising tool for assessing integration of health promotion and health protection. Further work is needed to test its dimensionality and validate its use in other samples.
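
    Both statistics used in the validation, Cronbach's alpha and Pearson correlations, are straightforward to compute. Here is a sketch of the alpha calculation on hypothetical item scores; the actual survey items and responses are not reported in the abstract.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Hypothetical survey responses: 6 respondents x 4 integration-score items.
    scores = [[4, 5, 4, 5],
              [2, 2, 3, 2],
              [5, 4, 5, 5],
              [3, 3, 2, 3],
              [4, 4, 4, 5],
              [1, 2, 1, 2]]
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```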

  5. Context matters: volunteer bias, small sample size, and the value of comparison groups in the assessment of research-based undergraduate introductory biology lab courses.

    PubMed

    Brownell, Sara E; Kloser, Matthew J; Fukami, Tadashi; Shavelson, Richard J

    2013-01-01

    The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the need for evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course.

  6. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach

    PubMed Central

    Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric

    2016-01-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles. PMID:26943927

  7. Standardized mean differences cause funnel plot distortion in publication bias assessments.

    PubMed

    Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E

    2017-09-08

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
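
    The distortion mechanism described here is easy to reproduce in simulation. The sketch below is a hedged illustration, not the authors' code: it generates small two-arm studies with a true effect and no publication bias at all, and shows that the SMD and its SE are nonetheless correlated, which is precisely what reads as asymmetry in an SMD-versus-SE funnel plot.

    ```python
    # Hedged simulation: SMD and its SE are correlated by construction,
    # because the usual SE formula for the SMD contains the SMD itself.
    import numpy as np

    rng = np.random.default_rng(0)
    true_d = 0.5        # assumed true intervention effect
    n_per_arm = 10      # small primary studies, as in the paper's worst case
    n_studies = 500

    smd, se_smd = [], []
    for _ in range(n_studies):
        ctrl = rng.normal(0.0, 1.0, n_per_arm)
        trt = rng.normal(true_d, 1.0, n_per_arm)
        sp = np.sqrt((ctrl.var(ddof=1) + trt.var(ddof=1)) / 2)  # pooled SD
        d = (trt.mean() - ctrl.mean()) / sp                     # Cohen's d
        # Large-sample SE of the SMD for equal arms of size n: sqrt(2/n + d^2/(4n)).
        # Note that d appears in its own SE, which induces the correlation.
        se = np.sqrt(2 / n_per_arm + d**2 / (4 * n_per_arm))
        smd.append(d)
        se_smd.append(se)

    # No publication bias was simulated, yet SMD and SE still correlate;
    # a sample-size-based precision axis would avoid this artefact.
    print("corr(SMD, SE) =", np.corrcoef(smd, se_smd)[0, 1])
    ```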

  8. Standardized mean differences cause funnel plot distortion in publication bias assessments

    PubMed Central

    Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris AH; Chamuleau, Steven AJ; MacLeod, Malcolm R

    2017-01-01

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results. PMID:28884685

  9. Testing of commonly used mixing and sampling procedures to evaluate fertilizer blends prepared with matched and mismatched particle sizes.

    PubMed

    Hall, William L; Ramsey, Charles; Falls, J Harold

    2014-01-01

    Bulk blending of dry fertilizers is a common practice in the United States and around the world. This practice involves the mixing (either physically or volumetrically) of concentrated, high-analysis raw materials. Blending is followed by bagging (for small volume application such as lawn and garden products), loading into truck transports, and spreading. The great majority of bulk blended products are not bagged but handled in bulk and transferred from the blender to a holding hopper. The product is then transferred to a transport vehicle, which may, or may not, also be a spreader. If the primary transport vehicle is not a spreader, then there is another transfer at the user site to a spreader for application. Segregation of materials that are mismatched due to size, density, or shape is an issue when attempting to effectively sample or evenly spread bulk blended products. This study, prepared in coordination with and supported by the Florida Department of Agriculture and Consumer Services and the Florida Fertilizer and Agrochemical Association, looks at the impact of varying particle size as it relates to blending, sampling, and application of bulk blends. The study addresses blends containing high ratios of N-P-K materials and varying (often small) quantities of the micronutrient Zn.

  10. Crospovidone interactions with water. II. Dynamic vapor sorption analysis of the effect of Polyplasdone particle size on its uptake and distribution of water.

    PubMed

    Saripella, Kalyan K; Mallipeddi, Rama; Neau, Steven H

    2014-11-20

    Polyplasdone of different particle sizes was used to study the sorption, desorption, and distribution of water, and to seek evidence that larger particles can internalize water. The three samples were Polyplasdone® XL, XL-10, and INF-10. Moisture sorption and desorption isotherms at 25 °C at 5% intervals from 0 to 95% relative humidity (RH) were generated by dynamic vapor sorption analysis. The three products provided similar data, judged to be Type III with a small hysteresis that appears when RH is below 65%. The absence of a rounded knee in the sorption curve suggests that multilayers form before the monolayer is completed. The hysteresis indicates that internally absorbed moisture is trapped as the water is desorbed and the polymer sample shrinks, thus requiring a lower level of RH to continue desorption. The Guggenheim-Anderson-de Boer (GAB) and the Young and Nelson equations were fitted to the data. The W(m), C(G), and K values from the GAB analysis are similar across the three samples, revealing 0.962 water molecules per repeating unit in the monolayer. A small amount of absorbed water is identified, but this is consistent across the three particle sizes. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.

  12. Longitudinal Monitoring of Successive Commercial Layer Flocks for Salmonella enterica Serovar Enteritidis.

    PubMed

    Denagamage, Thomas N; Patterson, Paul; Wallner-Pendleton, Eva; Trampel, Darrell; Shariat, Nikki; Dudley, Edward G; Jayarao, Bhushan M; Kariyawasam, Subhashinie

    2016-11-01

    The Pennsylvania Egg Quality Assurance Program (EQAP) provided the framework for Salmonella Enteritidis (SE) control programs, including the Food and Drug Administration (FDA) mandated Final Egg Rule, for commercial layer facilities throughout the United States. Although flocks with ≥3000 birds must comply with the FDA Final Egg Rule, smaller flocks are exempted from the rule. As a result, eggs produced by small layer flocks may pose a greater public health risk than those from larger flocks. It is also unknown if the EQAPs developed with large flocks in mind are suitable for small- and medium-sized flocks. Therefore, a study was performed to evaluate the effectiveness of best management practices included in EQAPs in reducing SE contamination of small- and medium-sized flocks by longitudinal monitoring of their environment and eggs. A total of 59 medium-sized (3000 to 50,000 birds) and small-sized (<3000 birds) flocks from two major layer production states of the United States were enrolled and monitored for SE by culturing different types of environmental samples and shell eggs for two consecutive flock cycles. Isolated SE was characterized by phage typing, pulsed-field gel electrophoresis (PFGE), and clustered regularly interspaced short palindromic repeats-multi-virulence-locus sequence typing (CRISPR-MVLST). Fifty-four Salmonella isolates belonging to 17 serovars, 22 of which were SE, were isolated from multiple sample types. Typing revealed that SE isolates belonged to three phage types (PTs), three PFGE fingerprint patterns, and three CRISPR-MVLST SE Sequence Types (ESTs). PT8 and the JEGX01.0004 PFGE pattern, the SE types most often associated with foodborne illness in the United States, accounted for the majority (91%) of SE isolates. Of the three ESTs observed, 85% of SE isolates were typed as EST4. The proportion of SE-positive hen house environmental samples during flock cycle 2 was significantly lower than during flock cycle 1, demonstrating that current EQAP practices were effective in reducing SE contamination of medium and small layer flocks.

  13. [Monitoring microbiological safety of small systems of water distribution. Comparison of two sampling programs in a town in central Italy].

    PubMed

    Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A

    2005-01-01

    To determine the frequency of sampling in small water distribution systems (<5,000 inhabitants) and compare the results under different hypotheses about the distribution of bacteria. We carried out two sampling programs to monitor the water distribution system in a town in Central Italy between July and September 1992; the assumption of a Poisson distribution implied 4 water samples, while the assumption of a negative binomial distribution implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably, from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (sd = 5.29) for 21 samples and 3 coliforms/100 ml (sd = 6) for four samples. However, the hypothesis of homogeneity was rejected (p-value <0.001), and the probability of a type II error under the heterogeneity assumption was higher with 4 samples (beta = 0.24) than with 21 (beta = 0.05). For this small network, determining the sample size under the heterogeneity hypothesis supports the conclusion that the water is drinkable more strongly than the homogeneity assumption does.
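
    To see why the distributional assumption drives the required number of samples, consider the hedged simulation below. The dispersion parameter and the simple "flag contamination if any sample is positive" rule are illustrative assumptions, not the study's actual decision criterion; the point is only that clumped (negative binomial) counts let a small sampling program miss contamination far more often.

    ```python
    # Hedged illustration: probability of missing contamination with 4 vs 21
    # samples when coliform counts are overdispersed (negative binomial).
    import numpy as np

    rng = np.random.default_rng(1)
    mean_count = 2.33   # coliforms/100 ml, as reported for the 21-sample run
    dispersion = 0.1    # hypothetical NB shape parameter (smaller = more clumped)
    n_sims = 20_000

    def miss_rate(n_samples: int) -> float:
        """Fraction of simulated programs in which every sample is clean
        (zero coliforms), i.e. contamination goes undetected."""
        # NumPy's negative_binomial(n, p) has mean n*(1-p)/p, so p = k/(k+mu):
        p = dispersion / (dispersion + mean_count)
        draws = rng.negative_binomial(dispersion, p, size=(n_sims, n_samples))
        return np.mean(draws.max(axis=1) == 0)

    for n in (4, 21):
        print(f"{n:2d} samples: contamination missed in {miss_rate(n):.1%} of runs")
    ```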

  14. Structural characterization of casein micelles: shape changes during film formation.

    PubMed

    Gebhardt, R; Vendrely, C; Kulozik, U

    2011-11-09

    The objective of this study was to determine the effect of size-fractionation by centrifugation on the film structure of casein micelles. Fractionated casein micelles in solution were asymmetrically distributed with a small distribution width as measured by dynamic light scattering. Films prepared from the size-fractionated samples showed a smooth surface in optical microscopy images and a homogeneous microstructure in atomic force micrographs. The nano- and microstructure of casein films was probed by micro-beam grazing incidence small angle x-ray scattering (μGISAXS). Compared to the solution measurements, the sizes determined in the film were larger and broadly distributed. The measured GISAXS patterns clearly deviate from those simulated for a sphere and suggest a deformation of the casein micelles in the film. © 2011 IOP Publishing Ltd

  15. Freeze-frame fruit selection by birds

    USGS Publications Warehouse

    Foster, Mercedes S.

    2008-01-01

    The choice of fruits by an avian frugivore is affected by choices it makes at multiple hierarchical levels (e.g., species of fruit, individual tree, individual fruit). Factors that influence those choices vary among levels in the hierarchy and include characteristics of the environment, the tree, and the fruit itself. Feeding experiments with wild-caught birds were conducted at El Tirol, Departamento de Itapua, Paraguay to test whether birds were selecting among individual fruits based on fruit size. Feeding on larger fruits, which have proportionally more pulp, is generally more efficient than feeding on small fruits. In trials (n = 56) with seven species of birds in four families, birds selected larger fruits 86% of the time. However, in only six instances were size differences significant, which is likely a reflection of small sample sizes.

  16. Preparing Monodisperse Macromolecular Samples for Successful Biological Small-Angle X-ray and Neutron Scattering Experiments

    PubMed Central

    Jeffries, Cy M.; Graewert, Melissa A.; Blanchet, Clément E.; Langley, David B.; Whitten, Andrew E.; Svergun, Dmitri I

    2017-01-01

    Small-angle X-ray and neutron scattering (SAXS and SANS) are techniques used to extract structural parameters and determine the overall structures and shapes of biological macromolecules, complexes and assemblies in solution. The scattering intensities measured from a sample contain contributions from all atoms within the illuminated sample volume including the solvent and buffer components as well as the macromolecules of interest. In order to obtain structural information, it is essential to prepare an exactly matched solvent blank so that background scattering contributions can be accurately subtracted from the sample scattering to obtain the net scattering from the macromolecules in the sample. In addition, sample heterogeneity caused by contaminants, aggregates, mismatched solvents, radiation damage or other factors can severely influence and complicate data analysis so it is essential that the samples are pure and monodisperse for the duration of the experiment. This Protocol outlines the basic physics of SAXS and SANS and reveals how the underlying conceptual principles of the techniques ultimately ‘translate’ into practical laboratory guidance for the production of samples of sufficiently high quality for scattering experiments. The procedure describes how to prepare and characterize protein and nucleic acid samples for both SAXS and SANS using gel electrophoresis, size exclusion chromatography and light scattering. Also included are procedures specific to X-rays (in-line size exclusion chromatography SAXS) and neutrons, specifically preparing samples for contrast matching/variation experiments and deuterium labeling of proteins. PMID:27711050

  17. Lot quality assurance sampling techniques in health surveys in developing countries: advantages and current constraints.

    PubMed

    Lanata, C F; Black, R E

    1991-01-01

    Traditional survey methods, which are generally costly and time-consuming, usually provide information at the regional or national level only. The utilization of lot quality assurance sampling (LQAS) methodology, developed in industry for quality control, makes it possible to use small sample sizes when conducting surveys in small geographical or population-based areas (lots). This article describes the practical use of LQAS for conducting health surveys to monitor health programmes in developing countries. Following a brief description of the method, the article explains how to build a sample frame and conduct the sampling to apply LQAS under field conditions. A detailed description of the procedure for selecting a sampling unit to monitor the health programme and a sample size is given. The sampling schemes utilizing LQAS applicable to health surveys, such as simple- and double-sampling schemes, are discussed. The interpretation of the survey results and the planning of subsequent rounds of LQAS surveys are also discussed. When describing the applicability of LQAS in health surveys in developing countries, the article considers current limitations for its use by health planners in charge of health programmes, and suggests ways to overcome these limitations through future research. It is hoped that with increasing attention being given to industrial sampling plans in general, and LQAS in particular, their utilization to monitor health programmes will provide health planners in developing countries with powerful techniques to help them achieve their health programme targets.
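
    The core of LQAS is ordinary binomial acceptance sampling. The sketch below computes the operating characteristic of a simple decision rule; the plan shown (n = 19, d = 3) is a commonly cited LQAS example used here as a worked illustration, not a plan taken from this article.

    ```python
    # Minimal sketch of the binomial arithmetic behind LQAS: sample n units
    # from a lot and "accept" the lot (programme passes) if at most d
    # failures are observed. The operating characteristic curve gives the
    # acceptance probability at any true failure prevalence p.
    from scipy.stats import binom

    def acceptance_prob(n: int, d: int, p: float) -> float:
        """P(lot accepted) = P(at most d failures in n samples | prevalence p)."""
        return binom.cdf(d, n, p)

    n, d = 19, 3   # a classic LQAS plan, used as a worked example
    for p in (0.05, 0.20, 0.50):
        print(f"true prevalence {p:.0%}: lot accepted with prob {acceptance_prob(n, d, p):.3f}")
    ```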

  18. Testing the 'island rule' for a tenebrionid beetle (Coleoptera, Tenebrionidae)

    NASA Astrophysics Data System (ADS)

    Palmer, Miquel

    2002-05-01

    Insular populations and their closest mainland counterparts commonly display body size differences that are considered to fit the island rule, a theoretical framework to explain both dwarfism and gigantism in isolated animal populations. The island rule is used to explain the pattern of change of body size at the inter-specific level. But the model also implicitly makes a prediction about the body size of isolated populations of a single species. It suggests that, for a hypothetical species covering a wide range of island sizes, there exists a specific island size where this species reaches its largest body size. Body size would be small (in relative terms) on the smallest islets of the species range. It would increase with island size, and reach a maximum at some specific island size. However, further increases beyond that island size would instead promote body size reduction, and small (in relative terms) body sizes would be found again on the largest islands. The biogeographical patterns predicted by the island rule have been described and analysed for vertebrates only (mainly mammals), but remain largely untested for insects or other invertebrates. I analyse here the pattern of body size variation between seven isolated insular populations of a flightless beetle, Asida planipennis (Coleoptera, Tenebrionidae). This is an endemic species of Mallorca, Menorca and a number of islands and islets in the Balearic archipelago (western Mediterranean). The study covers seven of the 15 known populations (i.e., there are only 15 islands or islets inhabited by the species). The populations studied fit the pattern advanced above and we could, therefore, extrapolate the island rule to a very different kind of organism. However, the small sample size of some of the populations invites some caution at this early stage.

  19. Acoustic Purification of Extracellular Microvesicles

    PubMed Central

    Lee, Kyungheon; Shao, Huilin; Weissleder, Ralph; Lee, Hakho

    2015-01-01

    Microvesicles (MVs) are an increasingly important source for biomarker discovery and clinical diagnostics. The small size of MVs and their presence in complex biological environment, however, pose practical technical challenges, particularly when sample volumes are small. We herein present an acoustic nano-filter system that size-specifically separates MVs in a continuous and contact-free manner. The separation is based on ultrasound standing waves that exert differential acoustic force on MVs according to their size and density. By optimizing the design of the ultrasound transducers and underlying electronics, we were able to achieve a high separation yield and resolution. The “filter size-cutoff” can be controlled electronically in situ and enables versatile MV-size selection. We applied the acoustic nano-filter to isolate nanoscale (<200 nm) vesicles from cell culture media as well as MVs in stored red blood cell products. With the capacity for rapid and contact-free MV isolation, the developed system could become a versatile preparatory tool for MV analyses. PMID:25672598

  20. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature.

    PubMed

    Szucs, Denes; Ioannidis, John P A

    2017-03-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64-1.46) for nominally statistically significant results and D = 0.24 (0.11-0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
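
    Power figures like those reported here can be recomputed directly from the noncentral t distribution. The sketch below assumes a two-sided, two-sample t-test with equal group sizes; the n = 20 per group is a hypothetical value chosen to illustrate typical small-sample power, not a number from the paper.

    ```python
    # Sketch: power of a two-sided two-sample t-test via the noncentral t.
    import numpy as np
    from scipy.stats import nct, t as t_dist

    def power_two_sample_t(d: float, n_per_group: int, alpha: float = 0.05) -> float:
        df = 2 * n_per_group - 2
        ncp = d * np.sqrt(n_per_group / 2)        # noncentrality parameter
        t_crit = t_dist.ppf(1 - alpha / 2, df)    # two-sided critical value
        # P(reject H0 | true effect d); the lower-tail term is usually negligible.
        return (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)

    for label, d in (("small", 0.2), ("medium", 0.5), ("large", 0.8)):
        print(f"{label} effect (d={d}), n=20/group: power = {power_two_sample_t(d, 20):.2f}")
    ```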

  1. Conceptual data sampling for breast cancer histology image classification.

    PubMed

    Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir

    2017-10-01

    Data analytics have become increasingly complicated as the amount of data has increased. One technique that is used to enable data analytics in large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied in classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, as reflected in classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Measurement of particulates

    NASA Technical Reports Server (NTRS)

    Woods, D.

    1980-01-01

    The size distributions of particles in the exhaust plumes from the Titan rockets launched in August and September 1977 were determined from in situ measurements made from a small sampling aircraft that flew through the plumes. Two different sampling instruments were employed, a quartz crystal microbalance (QCM) cascade impactor and a forward scattering spectrometer probe (FSSP). The QCM measured the nonvolatile component of the aerosols in the plume, covering aerodynamic sizes from 0.05 to 25 micrometers in diameter. The FSSP, flown outside the aircraft under the nose section, measured both the liquid droplets and the solid particles over a size range from 0.5 to 7.5 micrometers in diameter. The particles were counted and classified into 15 size intervals. The presence of a large number of liquid droplets in the exhaust clouds is discussed and data are plotted for each launch and compared.

  3. A comparison of machine learning methods for classification using simulation with multiple real data examples from mental health studies.

    PubMed

    Khondoker, Mizanur; Dobson, Richard; Skirrow, Caroline; Simmons, Andrew; Stahl, Daniel

    2016-10-01

    Recent literature on the comparison of machine learning methods has raised questions about the neutrality, unbiasedness and utility of many comparative studies. Reporting of results on favourable datasets and sampling error in the estimated performance measures based on single samples are thought to be the major sources of bias in such comparisons. Better performance in one or a few instances does not necessarily imply so on an average or on a population level, and simulation studies may be a better alternative for objectively comparing the performances of machine learning algorithms. We compare the classification performance of a number of important and widely used machine learning algorithms, namely Random Forests (RF), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and k-Nearest Neighbour (kNN). Using massively parallel processing on high-performance supercomputers, we compare the generalisation errors at various combinations of levels of several factors: number of features, training sample size, biological variation, experimental variation, effect size, replication and correlation between features. For a smaller number of correlated features, with the number of features not exceeding approximately half the sample size, LDA was found to be the method of choice in terms of average generalisation errors as well as stability (precision) of error estimates. SVM (with RBF kernel) outperforms LDA as well as RF and kNN by a clear margin as the feature set gets larger, provided the sample size is not too small (at least 20). The performance of kNN also improves as the number of features grows and outperforms that of LDA and RF unless the data variability is too high and/or effect sizes are too small. RF was found to outperform only kNN in some instances where the data are more variable and have smaller effect sizes, in which cases it also provides more stable error estimates than kNN and LDA. Applications to a number of real datasets supported the findings from the simulation study. © The Author(s) 2013.
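
    A compressed version of this kind of simulation comparison can be run with scikit-learn. The sketch below is only a toy stand-in for the authors' supercomputer study: the synthetic data generator, sample size, and hyperparameters are assumptions chosen for illustration, not their factorial design.

    ```python
    # Toy sketch: cross-validated error of LDA, SVM (RBF), RF and kNN on
    # synthetic data as the number of features grows past the sample size.
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "SVM-RBF": SVC(kernel="rbf"),
        "RF": RandomForestClassifier(n_estimators=200, random_state=0),
        "kNN": KNeighborsClassifier(n_neighbors=5),
    }

    for n_features in (10, 100, 500):   # training size fixed at 40 (20 per class)
        X, y = make_classification(n_samples=40, n_features=n_features,
                                   n_informative=min(10, n_features),
                                   n_redundant=0, random_state=0)
        errs = {name: 1 - cross_val_score(m, X, y, cv=5).mean()
                for name, m in models.items()}
        print(n_features, {k: round(v, 2) for k, v in errs.items()})
    ```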

  4. Filling in the Gaps: Xenoliths in Meteorites are Samples of "Missing" Asteroid Lithologies

    NASA Technical Reports Server (NTRS)

    Zolensky, Mike

    2016-01-01

    We know that the stones that fall to earth as meteorites are not representative of the full diversity of small solar system bodies, because of the peculiarities of the dynamical processes that send material into Earth-crossing paths [1], which result in severe selection biases. Thus, the bulk of the meteorites that fall are insufficient to understand the full range of early solar system processes. However, the situation is different for pebble- and smaller-sized objects that stream past the giant planets and asteroid belts into the inner solar system in a representative manner. Thus, micrometeorites and interplanetary dust particles have been exploited to permit study of objects that do not provide meteorites to earth. However, there is another population of materials that sample a larger range of small solar system bodies, but which have received little attention - pebble-sized foreign clasts in meteorites (also called xenoliths, dark inclusions, clasts, etc.). Unfortunately, most previous studies of these clasts have been misleading, in that these objects have simply been identified as pieces of CM or CI chondrites. In our work we have found this to be generally erroneous, and that CM and especially CI clasts are actually rather rare. We therefore test the hypothesis that these clasts sample the full range of small solar system bodies. We have located and obtained samples of clasts in 81 different meteorites, and have begun a thorough characterization of the bulk compositions, mineralogies, petrographies, and organic compositions of this unique sample set. In addition to the standard e-beam analyses, recent advances in technology now permit us to measure bulk O isotopic compositions, and major- though trace-element compositions of the sub-mm-sized discrete clasts. Detailed characterization of these clasts permits us to explore the full range of mineralogical and petrologic processes in the early solar system, including the nature of fluids in the Kuiper belt and the outer main asteroid belt, as revealed by the mineralogy of secondary phases.

  5. Economic Analysis of a Multi-Site Prevention Program: Assessment of Program Costs and Characterizing Site-level Variability

    PubMed Central

    Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.

    2013-01-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559

  6. Economic analysis of a multi-site prevention program: assessment of program costs and characterizing site-level variability.

    PubMed

    Corso, Phaedra S; Ingels, Justin B; Kogan, Steven M; Foster, E Michael; Chen, Yi-Fu; Brody, Gene H

    2013-10-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95 % confidence interval) incremental difference was $2,149 ($397, $3,901). With the probabilistic sensitivity analysis approach, the incremental difference was $2,583 ($778, $4,346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention.
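
    As a concrete illustration of the probabilistic sensitivity analysis approach described above, the sketch below resamples a total programme cost from distributions placed on each cost input. Every distribution and dollar figure here is invented for illustration; nothing is drawn from the SAAF-T data.

    ```python
    # Hedged Monte Carlo sketch of probabilistic sensitivity analysis for
    # programme costs: each input gets a distribution (in practice fitted to
    # collected cost data) and the total is resampled to get an interval.
    import numpy as np

    rng = np.random.default_rng(2)
    n_draws = 10_000

    # Hypothetical per-site cost components with assumed distributions:
    staff = rng.gamma(shape=25.0, scale=40.0, size=n_draws)        # right-skewed
    materials = rng.normal(loc=300.0, scale=50.0, size=n_draws)
    facilitator_training = rng.gamma(shape=9.0, scale=60.0, size=n_draws)

    total = staff + materials + facilitator_training
    lo, hi = np.percentile(total, [2.5, 97.5])
    print(f"mean total cost ${total.mean():,.0f} (95% interval ${lo:,.0f}-${hi:,.0f})")
    ```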

  7. Bayesian evaluation of effect size after replicating an original study

    PubMed Central

    van Aert, Robbie C. M.; van Assen, Marcel A. L. M.

    2017-01-01

    The vast majority of published results in the literature is statistically significant, which raises concerns about their reliability. The Reproducibility Project Psychology (RPP) and Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. The original study and replication were statistically significant in 36.1% in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and quantifies the amount of evidence in favor of a zero, small, medium and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance and demonstrate the necessity of controlling for the original study's significance to enable the accumulation of evidence for a true zero effect. Then we applied the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the included studies in EE-RP are generally larger than in RPP, but that the sample sizes, especially of the studies included in RPP, are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of the replication akin to power analysis in null hypothesis significance testing and present an easy to use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method. PMID:28388646

  8. Use of care management practices in small- and medium-sized physician groups: do public reporting of physician quality and financial incentives matter?

    PubMed

    Alexander, Jeffrey A; Maeng, Daniel; Casalino, Lawrence P; Rittenhouse, Diane

    2013-04-01

    To examine the effect of public reporting (PR) and financial incentives tied to quality performance on the use of care management practices (CMPs) among small- and medium-sized physician groups. Survey data from The National Study of Small and Medium-sized Physician Practices were used. Primary data collection was also conducted to assess community-level PR activities. The final sample included 643 practices engaged in quality reporting; about half of these practices were subject to PR. We used a treatment effects model. The instrumental variables were the community-level variables that capture the level of PR activity in each community in which the practices operate. (1) PR is associated with increased use of CMPs, but the estimate is not statistically significant; (2) financial incentives are associated with greater use of CMPs; (3) practices' awareness/sensitivity to quality reports is positively related to their use of CMPs; and (4) combined PR and financial incentives jointly affect CMP use to a greater degree than either of these factors alone. Small- to medium-sized practices appear to respond to PR and financial incentives by greater use of CMPs. Future research needs to investigate the appropriate mix and type of incentive arrangements and quality reporting. © Health Research and Educational Trust.

  9. A small, sensitive, light-weight, and disposable aerosol spectrometer for balloon and UAV applications

    NASA Astrophysics Data System (ADS)

    Fahey, D. W.; Gao, R.; Thornberry, T. D.; Rollins, D. W.; Schwarz, J. P.; Perring, A. E.

    2013-12-01

    In-situ sampling with particle size spectrometers is an important method to provide detailed size spectra for atmospheric aerosol in the troposphere and stratosphere. The spectra are essential for understanding aerosol sources and aerosol chemical evolution and removal, and for aerosol remote sensing validation. These spectrometers are usually bulky, heavy, and expensive, thereby limiting their application to specific airborne platforms. Here we report a new type of small and light-weight optical aerosol particle size spectrometer that is sensitive enough for many aerosol applications yet is inexpensive enough to be disposable. 3D printing is used for producing structural components for simplicity and low cost. Weighing less than 1 kg individually, we expect these spectrometers can be deployed successfully on small unmanned aircraft systems (UASs) and up to 25 km on weather balloons. Immediate applications include the study of Arctic haze using the Manta UAS, detection of the Asian Tropopause Aerosol Layer in the Asian monsoon system and SAGE III validation onboard weather balloons.

  10. Effect of capping and particle size on Raman laser-induced degradation of {gamma}-Fe{sub 2}O{sub 3} nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varadwaj, K.S.K.; Panigrahi, M.K.; Ghose, J.

    2004-11-01

    Diol-capped γ-Fe₂O₃ nanoparticles are prepared from ferric nitrate by refluxing in 1,4-butanediol (9.5 nm) and 1,5-pentanediol (15 nm), and uncapped particles are prepared by refluxing in 1,2-propanediol followed by sintering of the alkoxide formed. X-ray diffraction (XRD) shows that all the samples have the spinel phase. Raman spectroscopy shows that the samples prepared in 1,4-butanediol, 1,5-pentanediol, and 1,2-propanediol (sintered at 573 and 673 K) are γ-Fe₂O₃ and that the 773 K-sintered sample is Fe₃O₄. Raman laser studies carried out at various laser powers show that all the samples undergo laser-induced degradation to α-Fe₂O₃ at higher laser power. The capped samples are, however, more stable against degradation than the uncapped samples. The γ-Fe₂O₃ sample with large particle size (15.4 nm) is more stable than the sample with small particle size (10.2 nm). Fe₃O₄ with a particle size of 48 nm is, however, less stable than the smaller γ-Fe₂O₃ nanoparticles.

  11. US Food and Drug Administration survey of methyl mercury in canned tuna

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yess, J.

    1993-01-01

    Methyl mercury was determined by the US Food and Drug Administration (FDA) in 220 samples of canned tuna collected in 1991. Samples were chosen to represent different styles, colors, and packs as available. Emphasis was placed on water-packed tuna, small can size, and the highest-volume brand names. The average methyl mercury (expressed as Hg) found for the 220 samples was 0.17 ppm; the range was <0.10-0.75 ppm. Statistically, a significantly higher level of methyl mercury was found in solid white and chunk tuna. Methyl mercury level was not related to can size. None of the 220 samples had methyl mercury levels that exceeded the 1 ppm FDA action level. 11 refs., 1 tab.

  12. Improvements to sample processing and measurement to enable more widespread environmental application of tritium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moran, James; Alexander, Thomas; Aalseth, Craig

    2017-08-01

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of T behavior in the environment.

  13. Nondestructive Analysis of Astromaterials by Micro-CT and Micro-XRF Analysis for PET Examination

    NASA Technical Reports Server (NTRS)

    Zeigler, R. A.; Righter, K.; Allen, C. C.

    2013-01-01

    An integral part of any sample return mission is the initial description and classification of returned samples by the preliminary examination team (PET). The goal of the PET is to characterize and classify returned samples and make this information available to the larger research community, who then conduct more in-depth studies on the samples. The PET tries to minimize the impact its work has on the sample suite, which has in the past limited PET work to largely visual, nonquantitative measurements (e.g., optical microscopy). More modern techniques can also be utilized by a PET to nondestructively characterize astromaterials in a much more rigorous way. Here we discuss our recent investigations into the applications of micro-CT and micro-XRF analyses with Apollo samples and ANSMET meteorites and assess the usefulness of these techniques for future PET work. Results: The application of micro computerized tomography (micro-CT) to astromaterials is not a new concept. The technique involves scanning samples with high-energy x-rays and constructing 3-dimensional images of the density of materials within the sample. The technique can routinely measure large samples (up to approx. 2700 cu cm) with a small individual voxel size (approx. 30 µm), and has the sensitivity to distinguish the major rock-forming minerals and identify clast populations within brecciated samples. We have recently run a test sample of a terrestrial breccia with a carbonate matrix and multiple igneous clast lithologies. The test results are promising and we will soon analyze an approx. 600 g piece of Apollo sample 14321 to map out the clast population within the sample. Benchtop micro x-ray fluorescence (micro-XRF) instruments can rapidly scan large areas (approx. 100 sq cm) with a small pixel size (approx. 25 microns) and measure the (semi)quantitative composition of largely unprepared surfaces for all elements between Be and U, often with sensitivity on the order of approx. 100 ppm. Our recent testing of meteorite and Apollo samples on micro-XRF instruments has shown that they can easily detect small zircons and phosphates (approx. 10 µm), distinguish different clast lithologies within breccias, and identify different lithologies within small rock fragments (2-4 mm Apollo soil fragments).

  14. Non-destructive controlled single-particle light scattering measurement

    NASA Astrophysics Data System (ADS)

    Maconi, G.; Penttilä, A.; Kassamakov, I.; Gritsevich, M.; Helander, P.; Puranen, T.; Salmi, A.; Hæggström, E.; Muinonen, K.

    2018-01-01

    We present a set of light scattering data measured from a millimeter-sized extraterrestrial rock sample. The data were acquired by our novel scatterometer, which enables accurate multi-wavelength measurements of single-particle samples whose position and orientation are controlled by ultrasonic levitation. The measurements demonstrate a non-destructive approach to derive optical properties of small mineral samples. This enables research on valuable materials, such as those returned from space missions or rare meteorites.

  15. Liquid chromatography-mass spectrometry platform for both small neurotransmitters and neuropeptides in blood, with automatic and robust solid phase extraction

    NASA Astrophysics Data System (ADS)

    Johnsen, Elin; Leknes, Siri; Wilson, Steven Ray; Lundanes, Elsa

    2015-03-01

    Neurons communicate via chemical signals called neurotransmitters (NTs). The numerous identified NTs can have very different physicochemical properties (solubility, charge, size etc.), so quantification of the various NT classes traditionally requires several analytical platforms/methodologies. We here report that a diverse range of NTs, e.g. the peptides oxytocin and vasopressin, the monoamines adrenaline and serotonin, and the amino acid GABA, can be simultaneously identified/measured in small samples, using an analytical platform based on liquid chromatography and high-resolution mass spectrometry (LC-MS). The automated platform is cost-efficient as manual sample preparation steps and one-time-use equipment are kept to a minimum. Zwitterionic HILIC stationary phases were used for both on-line solid phase extraction (SPE) and liquid chromatography (capillary format, cLC). This approach enabled compounds from all NT classes to elute in small volumes producing sharp and symmetric signals, and allowing precise quantifications of small samples, demonstrated with whole blood (100 microliters per sample). An additional robustness-enhancing feature is automatic filtration/filter back-flushing (AFFL), allowing hundreds of samples to be analyzed without any parts needing replacement. The platform can be installed by simple modification of a conventional LC-MS system.

  16. Phase transformations in a Cu−Cr alloy induced by high pressure torsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korneva, Anna, E-mail: a.korniewa@imim.pl; Straumal, Boris; Institut für Nanotechnologie, Karlsruher Institut für Technologie, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen

    2016-04-15

    Phase transformations induced by high pressure torsion (HPT) at room temperature were studied in two samples of the Cu-0.86 at.% Cr alloy that had been pre-annealed at 550 °C and 1000 °C to obtain two different initial states for the HPT procedure. Observation of the microstructure of the samples before HPT revealed that the sample annealed at 550 °C contained two types of Cr precipitates in the Cu matrix: large particles (size about 500 nm) and small ones (size about 70 nm). The sample annealed at 1000 °C showed only a small fraction of Cr precipitates (size about 2 μm). The subsequent HPT process resulted in the partial dissolution of Cr precipitates in the first sample, and in dissolution of Cr precipitates with simultaneous decomposition of the supersaturated solid solution in the other. However, the resulting microstructure of the samples after HPT was very similar in terms of grain size, phase composition, texture, and hardness. - Highlights: • Cu−Cr alloy with two different initial states was deformed by HPT. • Phase transformations in the deformed materials were studied. • SEM, TEM and X-ray diffraction techniques were used for microstructure analysis. • HPT leads to formation of the same microstructure independent of the initial state.

  17. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    PubMed

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
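
    The piecewise-exponential size histories mentioned above are simple to represent. Below is a toy sketch, with hypothetical epochs and growth rates (chosen so sizes match at epoch boundaries), of evaluating such a model backward in time; it illustrates the model class only, not the frequency-spectrum inference itself.

    ```python
    # Toy sketch of a piecewise-exponential effective population size N(t),
    # with t measured in generations before present. All numbers hypothetical.
    import numpy as np

    epochs = [0.0, 500.0, 2000.0]               # epoch start times (generations ago)
    n_start = [500_000.0, 20_000.0, 10_000.0]   # size at the start of each epoch
    # Per-generation exponential growth rates within each epoch (0 = constant):
    growth = [np.log(500_000 / 20_000) / 500,   # recent rapid growth
              np.log(20_000 / 10_000) / 1500,
              0.0]

    def pop_size(t: float) -> float:
        """Effective population size t generations before present."""
        i = np.searchsorted(epochs, t, side="right") - 1
        return n_start[i] * np.exp(-growth[i] * (t - epochs[i]))

    for t in (0, 100, 1000, 5000):
        print(f"N({t}) = {pop_size(t):,.0f}")
    ```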

  18. Size effects in olivine control strength in low-temperature plasticity regime

    NASA Astrophysics Data System (ADS)

    Kumamoto, K. M.; Thom, C.; Wallis, D.; Hansen, L. N.; Armstrong, D. E. J.; Goldsby, D. L.; Warren, J. M.; Wilkinson, A. J.

    2017-12-01

    The strength of the lithospheric mantle during deformation by low-temperature plasticity controls a range of geological phenomena, including lithospheric-scale strain localization, the evolution of friction on deep seismogenic faults, and the flexure of tectonic plates. However, constraints on the strength of olivine in this deformation regime are difficult to obtain from conventional rock-deformation experiments, and previous results vary considerably. We demonstrate via nanoindentation that the strength of olivine in the low-temperature plasticity regime is dependent on the length-scale of the test, with experiments on smaller volumes of material exhibiting larger yield stresses. This "size effect" has previously been explained in engineering materials as a result of the role of strain gradients and associated geometrically necessary dislocations in modifying plastic behavior. The Hall-Petch effect, in which a material with a small grain size exhibits a higher strength than one with a large grain size, is thought to arise from the same mechanism. The presence of a size effect resolves discrepancies among previous experimental measurements of olivine, which were either conducted using indentation methods or were conducted on polycrystalline samples with small grain sizes. An analysis of different low-temperature plasticity flow laws extrapolated to room temperature reveals a power-law relationship between length-scale (grain size for polycrystalline deformation and contact radius for indentation tests) and yield strength. This suggests that data from samples with large inherent length scales best represent the plastic strength of the coarse-grained lithospheric mantle. Additionally, the plastic deformation of nanometer- to micrometer-sized asperities on fault surfaces may control the evolution of fault roughness due to their size-dependent strength.
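
    The power-law relationship referred to at the end of this abstract can be recovered from data by a straight-line fit in log-log space. The sketch below uses hypothetical placeholder values purely to show the mechanics; the fitted exponent has no geological meaning.

    ```python
    # Sketch: fit yield strength = A * L^(-m) by linear regression of
    # log(strength) on log(length scale). Values below are placeholders.
    import numpy as np

    length_um = np.array([0.5, 2.0, 10.0, 50.0, 200.0])   # hypothetical length scales
    strength_gpa = np.array([9.0, 6.5, 4.8, 3.5, 2.6])    # hypothetical yield strengths

    slope, intercept = np.polyfit(np.log(length_um), np.log(strength_gpa), 1)
    print(f"size-effect exponent m = {-slope:.2f}, prefactor A = {np.exp(intercept):.2f} GPa")
    ```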

  19. Differential Item Functioning for Accommodated Students with Disabilities: Effect of Differences in Proficiency Distributions

    ERIC Educational Resources Information Center

    Quesen, Sarah

    2016-01-01

    When studying differential item functioning (DIF) with students with disabilities (SWD) focal groups typically suffer from small sample size, whereas the reference group population is usually large. This makes it possible for a researcher to select a sample from the reference population to be similar to the focal group on the ability scale. Doing…

  20. Normal Approximations to the Distributions of the Wilcoxon Statistics: Accurate to What "N"? Graphical Insights

    ERIC Educational Resources Information Center

    Bellera, Carine A.; Julien, Marilyse; Hanley, James A.

    2010-01-01

    The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…

  1. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    ERIC Educational Resources Information Center

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…

  2. Theory of Mind Development in Chinese Children: A Meta-Analysis of False-Belief Understanding across Cultures and Languages

    ERIC Educational Resources Information Center

    Liu, David; Wellman, Henry M.; Tardif, Twila; Sabbagh, Mark A.

    2008-01-01

    Theory of mind is claimed to develop universally among humans across cultures with vastly different folk psychologies. However, in the attempt to test and confirm a claim of universality, individual studies have been limited by small sample sizes, sample specificities, and an overwhelming focus on Anglo-European children. The current meta-analysis…

  3. Using random telephone sampling to recruit generalizable samples for family violence studies.

    PubMed

    Slep, Amy M Smith; Heyman, Richard E; Williams, Mathew C; Van Dyke, Cheryl E; O'Leary, Susan G

    2006-12-01

    Convenience sampling methods predominate in recruiting for laboratory-based studies within clinical and family psychology. The authors used random digit dialing (RDD) to determine whether they could feasibly recruit generalizable samples for 2 studies (a parenting study and an intimate partner violence study). RDD screen response rate was 42-45%; demographics matched those in the 2000 U.S. Census, with small- to medium-sized differences on race, age, and income variables. RDD respondents who qualified for, but did not participate in, the laboratory study of parents showed small differences on income, couple conflicts, and corporal punishment. Time and cost are detailed, suggesting that RDD may be a feasible, effective method by which to recruit more generalizable samples for in-laboratory studies of family violence when those studies have sufficient resources. (c) 2006 APA, all rights reserved.

  4. An intercomparison of the taxonomic and size composition of tropical macrozooplankton and micronekton collected using three sampling gears

    NASA Astrophysics Data System (ADS)

    Kwong, Lian E.; Pakhomov, Evgeny A.; Suntsov, Andrey V.; Seki, Michael P.; Brodeur, Richard D.; Pakhomova, Larisa G.; Domokos, Réka

    2018-05-01

    A micronekton intercalibration experiment was conducted off the southwest coast of Oahu Island, Hawaii, in October 2004. Day and night samples were collected in the epipelagic and mesopelagic zones using three micronekton sampling gears: the Cobb Trawl, the Isaacs-Kidd Midwater Trawl (IKMT), and the Hokkaido University Frame Trawl (HUFT). Taxonomic composition and the contribution of the main size groups to the total catch varied among gear types. However, the three gears exhibited similar taxonomic composition for macrozooplankton and micronekton ranging from 20 to 100 mm in length (MM20-100). The HUFT and IKMT captured more mesozooplankton and small MM20-100, while the Cobb trawl selected for larger MM20-100 and nekton. Taxonomic composition was described and inter-compared among gears. The relative efficacy of the three gears was assessed, and size-dependent intercalibration coefficients were developed for MM20-100.

  5. Delayed reward discounting and addictive behavior: a meta-analysis.

    PubMed

    MacKillop, James; Amlung, Michael T; Few, Lauren R; Ray, Lara A; Sweet, Lawrence H; Munafò, Marcus R

    2011-08-01

    Delayed reward discounting (DRD) is a behavioral economic index of impulsivity and numerous studies have examined DRD in relation to addictive behavior. To synthesize the findings across the literature, the current review is a meta-analysis of studies comparing DRD between criterion groups exhibiting addictive behavior and control groups. The meta-analysis sought to characterize the overall patterns of findings, systematic variability by sample and study type, and possible small study (publication) bias. Literature reviews identified 310 candidate articles from which 46 studies reporting 64 comparisons were identified (total N=56,013). From the total comparisons identified, a small magnitude effect was evident (d = .15; p < .00001) with very high heterogeneity of effect size. Based on systematic observed differences, large studies assessing DRD with a small number of self-report items were removed and an analysis of 57 comparisons (n=3,329) using equivalent methods and exhibiting acceptable heterogeneity revealed a medium magnitude effect (d = .58; p < .00001). Further analyses revealed significantly larger effect sizes for studies using clinical samples (d = .61) compared with studies using nonclinical samples (d = .45). Indices of small study bias among the various comparisons suggested varying levels of influence by unpublished findings, ranging from minimal to moderate. These results provide strong evidence of greater DRD in individuals exhibiting addictive behavior in general and particularly in individuals who meet criteria for an addictive disorder. Implications for the assessment of DRD and research priorities are discussed.

  6. Delayed reward discounting and addictive behavior: a meta-analysis

    PubMed Central

    Amlung, Michael T.; Few, Lauren R.; Ray, Lara A.; Sweet, Lawrence H.; Munafò, Marcus R.

    2011-01-01

    Rationale Delayed reward discounting (DRD) is a behavioral economic index of impulsivity and numerous studies have examined DRD in relation to addictive behavior. To synthesize the findings across the literature, the current review is a meta-analysis of studies comparing DRD between criterion groups exhibiting addictive behavior and control groups. Objectives The meta-analysis sought to characterize the overall patterns of findings, systematic variability by sample and study type, and possible small study (publication) bias. Methods Literature reviews identified 310 candidate articles from which 46 studies reporting 64 comparisons were identified (total N=56,013). Results From the total comparisons identified, a small magnitude effect was evident (d=.15; p<.00001) with very high heterogeneity of effect size. Based on systematic observed differences, large studies assessing DRD with a small number of self-report items were removed and an analysis of 57 comparisons (n=3,329) using equivalent methods and exhibiting acceptable heterogeneity revealed a medium magnitude effect (d=.58; p<.00001). Further analyses revealed significantly larger effect sizes for studies using clinical samples (d=.61) compared with studies using nonclinical samples (d=.45). Indices of small study bias among the various comparisons suggested varying levels of influence by unpublished findings, ranging from minimal to moderate. Conclusions These results provide strong evidence of greater DRD in individuals exhibiting addictive behavior in general and particularly in individuals who meet criteria for an addictive disorder. Implications for the assessment of DRD and research priorities are discussed. PMID:21373791

  7. Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials

    PubMed Central

    Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla

    2016-01-01

    Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
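    The most basic of the sample size "allowances" discussed above is the inflation of an individually randomised sample size by the design effect 1 + (m - 1) × ICC. The sketch below shows only that ingredient and is not a full stepped-wedge calculation (which must also handle time effects and repeated measures); the function name and numbers are illustrative assumptions.

        import math

        def clustered_sample_size(n_individual: float, cluster_size: float, icc: float) -> float:
            # Inflate an individually randomised sample size by the design
            # effect 1 + (m - 1) * ICC, the simplest allowance for clustering.
            design_effect = 1.0 + (cluster_size - 1.0) * icc
            return n_individual * design_effect

        # Example: 400 participants under individual randomisation, clusters
        # of m = 30, ICC = 0.05 -> design effect 2.45 -> about 980 participants.
        print(math.ceil(clustered_sample_size(400, 30, 0.05)))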

  8. Children's accuracy of portion size estimation using digital food images: effects of interface design and size of image on computer screen.

    PubMed

    Baranowski, Tom; Baranowski, Janice C; Watson, Kathleen B; Martin, Shelby; Beltran, Alicia; Islam, Noemi; Dadabhoy, Hafza; Adame, Su-heyla; Cullen, Karen; Thompson, Debbe; Buday, Richard; Subar, Amy

    2011-03-01

    To test the effect of image size and presence of size cues on the accuracy of portion size estimation by children. Children were randomly assigned to seeing images with or without food size cues (utensils and checked tablecloth) and were presented with sixteen food models (foods commonly eaten by children) in varying portion sizes, one at a time. They estimated each food model's portion size by selecting a digital food image. The same food images were presented in two ways: (i) as small, graduated portion size images all on one screen or (ii) by scrolling across large, graduated portion size images, one per sequential screen. Laboratory-based with computer and food models. Volunteer multi-ethnic sample of 120 children, equally distributed by gender and ages (8 to 13 years) in 2008-2009. Average percentage of correctly classified foods was 60.3%. There were no differences in accuracy by any design factor or demographic characteristic. Multiple small pictures on the screen at once took half the time to estimate portion size compared with scrolling through large pictures. Larger pictures had more overestimation of size. Multiple images of successively larger portion sizes of a food on one computer screen facilitated quicker portion size responses with no decrease in accuracy. This is the method of choice for portion size estimation on a computer.

  9. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    NASA Astrophysics Data System (ADS)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
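    The contrast between linear and inverse sampling of the scene space can be made concrete in a few lines. The sketch below is a simplified stand-in: the paper derives the plane locations from the cross-ratio in image space, whereas this version simply places planes uniformly in inverse depth, which likewise concentrates hypotheses near the camera; all numbers are illustrative.

        import numpy as np

        def linear_depth_planes(d_min: float, d_max: float, n: int) -> np.ndarray:
            # Equidistant plane hypotheses in scene depth: wasteful far away,
            # too coarse close to the camera under perspective projection.
            return np.linspace(d_min, d_max, n)

        def inverse_depth_planes(d_min: float, d_max: float, n: int) -> np.ndarray:
            # Plane hypotheses uniform in inverse depth (disparity): spacing
            # grows with distance, giving a higher sampling density near the
            # camera and a lower density in distant regions.
            return 1.0 / np.linspace(1.0 / d_min, 1.0 / d_max, n)

        print(np.round(inverse_depth_planes(2.0, 100.0, 8), 1))
        # -> tightly spaced planes near 2 m, coarse spacing towards 100 m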

  10. Surface ocean metabarcoding confirms limited diversity in planktonic foraminifera but reveals unknown hyper-abundant lineages.

    PubMed

    Morard, Raphaël; Garet-Delmas, Marie-José; Mahé, Frédéric; Romac, Sarah; Poulain, Julie; Kucera, Michal; de Vargas, Colomban

    2018-02-07

    Since the advent of DNA metabarcoding surveys, the planktonic realm is considered a treasure trove of diversity, inhabited by a small number of abundant taxa and a hugely diverse, taxonomically uncharacterized consortium of rare species. Here we assess whether the apparent underestimation of plankton diversity applies universally. We target planktonic foraminifera, a group of protists whose known morphological diversity is limited, taxonomically resolved, and linked to ribosomal DNA barcodes. We generated a pyrosequencing dataset of ~100,000 partial 18S rRNA foraminiferal sequences from 32 size-fractionated photic-zone plankton samples collected at 8 stations in the Indian and Atlantic Oceans during the Tara Oceans expedition (2009-2012). We identified 69 genetic types belonging to 41 morphotaxa in our metabarcoding dataset. The diversity saturated at local and regional scales, as well as in the three size fractions and the two depths sampled, indicating that the diversity of foraminifera is modest and finite. The large majority of the newly discovered lineages occur in the small size fraction, neglected by classical taxonomy. These unknown lineages dominate the bulk [>0.8 µm] size fraction, implying that a considerable part of the planktonic foraminifera community biomass has its origin in unknown lineages.

  11. Analysis of the typical small watershed of warping dams in the sand properties

    NASA Astrophysics Data System (ADS)

    Li, Li; Yang, Ji Shan; Sun, Wei Ying; Shen, Sha Sha

    2018-06-01

    Coarse sediment with a particle size greater than 0.05 mm is the main material deposited in the riverbed of the lower Yellow River, and the Loess Plateau is one of its concentrated sources; warping dams are an important engineering measure for gully control. The Jiuyuangou basin is a typical small basin in the first sub-region of the hilly-gullied loess region. Twenty warping dams in the Jiuyuangou basin were selected as the research object. Samples of sediment were taken along the main line of each dam from the upper, middle, and lower reaches of the dam fields, together with samples of undisturbed soil from the slopes of the dam-controlled basin, and particle gradation analyses were carried out to clarify, through the experimental data, the capacity of different types of warping dam to trap coarse sediment. The results show that the undisturbed slope soil has the characteristics of standard loess, with particle sizes mainly distributed between 0.025 and 0.05 mm; a particle size of 0.05 mm is an obvious boundary for the loess of the Jiuyuangou basin. The particle sizes of sediment in 15 warping dams of the Jiuyuangou basin are mainly distributed between 0.031 and 0.05 mm and are generally coarser at the dam tail than at the dam front. For particles larger than 0.05 mm, the separation effect of horizontal-pipe drainage is better than that of shaft drainage; notch dams sort particles between 0.025 and 0.1 mm, and fill dams particles between 0.016 and 0.1 mm. All dam types therefore perform a certain sediment-sorting function.

  12. Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care.

    PubMed

    Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo

    2015-02-01

    Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with the less prevalent the outcome the worse the coverage. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
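    As a concrete illustration of the unweighted cluster summary method that performed well in these simulations, the sketch below analyses a two-period cluster crossover trial with a binary outcome; the data are hypothetical and the function is a simplified reading of the method, not the authors' code.

        import numpy as np
        from scipy import stats

        def unweighted_cluster_summary(events_trt, n_trt, events_ctl, n_ctl):
            # One summary per cluster: the within-cluster difference in event
            # proportions between intervention and control periods, followed
            # by a one-sample t-test of the differences against zero.
            diffs = events_trt / n_trt - events_ctl / n_ctl
            t_stat, p_value = stats.ttest_1samp(diffs, popmean=0.0)
            return diffs.mean(), t_stat, p_value

        # Hypothetical in-hospital deaths / admissions for 6 ICUs, per period.
        deaths_trt = np.array([30, 42, 25, 51, 33, 40])
        n_trt = np.array([400, 520, 310, 610, 450, 500])
        deaths_ctl = np.array([38, 47, 31, 60, 35, 49])
        n_ctl = np.array([410, 500, 300, 620, 440, 510])
        print(unweighted_cluster_summary(deaths_trt, n_trt, deaths_ctl, n_ctl))

    Weighting each cluster equally is what makes the method "unweighted"; the inverse-variance weighted variant replaces the plain mean with a precision-weighted one at the cost of extra complexity.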

  13. Pharmaceutical production of tableting granules in an ultra-small-scale high-shear granulator as a pre-formulation study.

    PubMed

    Ogawa, Tatsuya; Uchino, Tomohiro; Takahashi, Daisuke; Izumi, Tsuyoshi; Otsuka, Makoto

    2012-11-01

    In some drug development programs, the amount of bulk drug powder available in the early stages is limited, and it is not easy to supply enough drug for conventional preparation methods. Therefore, an ultra-small-scale high-shear granulator (USG; less than 5 g) was developed and applied to small-scale granulation as a pre-formulation study. The sample powder consisted of 66.5% lactose, 28.5% microcrystalline cellulose and 5.0% hydroxypropylcellulose. Granules were obtained by agitating 5 g of the sample powder with 1.0 mL of water at 300 rpm for 5 min, after pre-mixing the powder for 3 min, using the USG and manual hand-mixing (HM) methods. The granules were evaluated by the 10% and 90% accumulated particle sizes and by the recoveries of the granules and the powder solids. Median particle size for the USG and HM methods was 159.2 ± 2.3 and 270.9 ± 14.9 µm, respectively. The USG method gave a narrower particle size distribution than the HM method, and the recovery of granules by the USG was significantly larger than that by the HM method. The characteristics of all the granules indicated that the USG method could produce higher-quality granules within a shorter time than the HM method.

  14. Spatial scale and sampling resolution affect measures of gap disturbance in a lowland tropical forest: implications for understanding forest regeneration and carbon storage

    PubMed Central

    Lobo, Elena; Dalling, James W.

    2014-01-01

    Treefall gaps play an important role in tropical forest dynamics and in determining above-ground biomass (AGB). However, our understanding of gap disturbance regimes is largely based either on surveys of forest plots that are small relative to spatial variation in gap disturbance, or on satellite imagery, which cannot accurately detect small gaps. We used high-resolution light detection and ranging data from a 1500 ha forest in Panama to: (i) determine how gap disturbance parameters are influenced by study area size, and the criteria used to define gaps; and (ii) to evaluate how accurately previous ground-based canopy height sampling can determine the size and location of gaps. We found that plot-scale disturbance parameters frequently differed significantly from those measured at the landscape-level, and that canopy height thresholds used to define gaps strongly influenced the gap-size distribution, an important metric influencing AGB. Furthermore, simulated ground surveys of canopy height frequently misrepresented the true location of gaps, which may affect conclusions about how relatively small canopy gaps affect successional processes and contribute to the maintenance of diversity. Across site comparisons need to consider how gap definition, scale and spatial resolution affect characterizations of gap disturbance, and its inferred importance for carbon storage and community composition. PMID:24452032

  15. The Population Structure of Glossina palpalis gambiensis from Island and Continental Locations in Coastal Guinea

    PubMed Central

    Solano, Philippe; Ravel, Sophie; Bouyer, Jeremy; Camara, Mamadou; Kagbadouno, Moise S.; Dyer, Naomi; Gardes, Laetitia; Herault, Damien; Donnelly, Martin J.; De Meeûs, Thierry

    2009-01-01

    Background We undertook a population genetics analysis of the tsetse fly Glossina palpalis gambiensis, a major vector of sleeping sickness in West Africa, using microsatellite and mitochondrial DNA markers. Our aims were to estimate effective population size and the degree of isolation between coastal sites on the mainland of Guinea and Loos Islands. The sampling locations encompassed Dubréka, the area with the highest Human African Trypanosomosis (HAT) prevalence in West Africa, mangrove and savannah sites on the mainland, and two islands, Fotoba and Kassa, within the Loos archipelago. These data are discussed with respect to the feasibility and sustainability of control strategies in those sites currently experiencing, or at risk of, sleeping sickness. Principal Findings We found very low migration rates between sites except between those sampled around the Dubréka area that seems to contain a widely dispersed and panmictic population. In the Kassa island samples, various effective population size estimates all converged on surprisingly small values (10

  16. Accurate and fast multiple-testing correction in eQTL studies.

    PubMed

    Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm

    2015-06-04

    In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck in eQTL studies. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
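    The permutation bottleneck the authors replace can be sketched in a few lines. The code below computes a gene-level p value by permuting the expression vector and recording the best association across cis variants (squared correlation stands in for the per-variant test); everything here is a schematic assumption, not the MVN-based method of the paper.

        import numpy as np

        def gene_level_p(expr, genotypes, n_perm=1000, seed=0):
            # genotypes: (samples x cis-variants) matrix; expr: expression vector.
            rng = np.random.default_rng(seed)
            g = (genotypes - genotypes.mean(0)) / genotypes.std(0)

            def best_r2(y):
                y = (y - y.mean()) / y.std()
                r = g.T @ y / len(y)          # correlation with every cis variant
                return (r ** 2).max()         # strongest single-variant signal

            observed = best_r2(expr)
            exceed = sum(best_r2(rng.permutation(expr)) >= observed
                         for _ in range(n_perm))
            return (1 + exceed) / (1 + n_perm)  # permutation p value for the eGene

    Each permutation rescans every variant, so the cost grows with both sample size and permutation count, which is precisely the dependence the multivariate normal approach removes.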

  17. Refinement of atomic and magnetic structures using neutron diffraction for synthesized bulk and nano-nickel zinc gallate ferrite

    NASA Astrophysics Data System (ADS)

    Ata-Allah, S. S.; Balagurov, A. M.; Hashhash, A.; Bobrikov, I. A.; Hamdy, Sh.

    2016-01-01

    The parent NiFe2O4 and Zn/Ga-substituted spinel ferrite powders were prepared by the solid-state reaction technique. As a typical example, the Ni0.7Zn0.3Fe1.5Ga0.5O4 sample was prepared by the sol-gel auto-combustion method with nano-scale crystallite size. X-ray and Mössbauer studies were carried out on the prepared samples. Structure and microstructure properties were investigated using the time-of-flight HRFD instrument at the IBR-2 pulsed reactor over the temperature range 15-473 K. Rietveld refinement of the neutron diffraction data revealed that all samples possess cubic symmetry corresponding to the space group Fd3m. The cation distribution shows that Ni2+ is a completely inverse spinel ion, while Ga3+ is distributed equally between the A and B sublattices. The level of microstrain in the bulk samples was estimated to be very small, while the size of the coherently scattering domains is quite large; for the nano-structured sample the domain size is around 120 Å.

  18. Surface enhanced Raman spectroscopy: A review of recent applications in forensic science

    NASA Astrophysics Data System (ADS)

    Fikiet, Marisia A.; Khandasammy, Shelby R.; Mistek, Ewelina; Ahmed, Yasmine; Halámková, Lenka; Bueno, Justin; Lednev, Igor K.

    2018-05-01

    Surface enhanced Raman spectroscopy has many advantages over its parent technique of Raman spectroscopy. Some of these advantages such as increased sensitivity and selectivity and therefore the possibility of small sample sizes and detection of small concentrations are invaluable in the field of forensics. A variety of new SERS surfaces and novel approaches are presented here on a wide range of forensically relevant topics.

  19. Small mammal habitat associations in poletimber and sawtimber stands of four forest cover types

    Treesearch

    Richard M. DeGraaf; Dana P. Snyder; Barbara J. Hill

    1991-01-01

    Small mammal distribution was examined in poletimber and sawtimber stands of four forest cover types in northern New England: northern hardwoods, red maple, balsam fir, and red spruce-balsam fir. During 1980 and 1981, eight stands on the White Mountain National Forest, NH, were sampled with four trap types (three sizes of snap traps and one pit-fall) for 16 000 trap-...

  20. A novel, efficient method for estimating the prevalence of acute malnutrition in resource-constrained and crisis-affected settings: A simulation study.

    PubMed

    Frison, Severine; Kerac, Marko; Checchi, Francesco; Nicholas, Jennifer

    2017-01-01

    The assessment of the prevalence of acute malnutrition in children under five is widely used for the detection of emergencies, planning interventions, advocacy, and monitoring and evaluation. This study examined PROBIT methods, which convert the parameters (mean and standard deviation (SD)) of a normally distributed variable into a cumulative probability below any cut-off, to estimate acute malnutrition in children under five using Middle-Upper Arm Circumference (MUAC). We assessed the performance of: PROBIT Method I, with mean MUAC from the survey sample and MUAC SD from a database of previous surveys; and PROBIT Method II, with the mean and SD of MUAC observed in the survey sample. Specifically, we generated sub-samples from 852 survey datasets, simulating 100 surveys for each of eight sample sizes. Overall, the methods were tested on 681,600 simulated surveys. PROBIT methods relying on sample sizes as small as 50 performed better than the classic method for estimating and classifying the prevalence of acute malnutrition. They had better precision in the estimation of acute malnutrition for all sample sizes and better coverage for smaller sample sizes, while having relatively little bias. They classified situations accurately for a threshold of 5% acute malnutrition. Both PROBIT methods had similar outcomes. PROBIT methods have a clear advantage over the classic method in the assessment of acute malnutrition prevalence based on MUAC. Their use would require much lower sample sizes, enabling great time and resource savings and permitting timely and/or locally relevant prevalence estimates of acute malnutrition for a swift and well-targeted response.
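    A PROBIT-style estimate is just a normal CDF evaluated at the case-definition cut-off. The sketch below assumes the common MUAC cut-off of 125 mm for global acute malnutrition and hypothetical survey values; Method I would substitute an SD taken from a database of previous surveys rather than the sample SD used here.

        from statistics import NormalDist

        def probit_prevalence(mean_muac_mm, sd_muac_mm, cutoff_mm=125.0):
            # Convert the mean and SD of (assumed normal) MUAC into the
            # cumulative probability below the cut-off, instead of directly
            # counting the few sampled children who fall below it.
            return NormalDist(mu=mean_muac_mm, sigma=sd_muac_mm).cdf(cutoff_mm)

        # Hypothetical survey: mean MUAC 145 mm, SD 12 mm -> ~4.8% prevalence.
        print(round(probit_prevalence(145.0, 12.0), 3))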

  1. 77 FR 61745 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-11

    ... submitted by small entities. Out of a sample size of 3,100 for each wave of data collection, the USPTO... by fax to 202-395-5167, marked to the attention of Nicholas A. Fraser. Dated: October 5, 2012. Susan...

  2. 0-6760 : improved trip generation data for Texas using workplace and special generator surveys.

    DOT National Transportation Integrated Search

    2014-08-01

    Trip generation rates play an important role in transportation planning, which can help in making informed decisions about future transportation investment and design. However, sometimes the rates are derived from small sample sizes or may ...

  3. Areal Control Using Generalized Least Squares As An Alternative to Stratification

    Treesearch

    Raymond L. Czaplewski

    2001-01-01

    Stratification for both variance reduction and areal control proliferates the number of strata, which causes small sample sizes in many strata. This might compromise statistical efficiency. Generalized least squares can, in principle, replace stratification for areal control.

  4. Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.

    PubMed

    Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai

    2014-12-18

    A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
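    To make the retention-of-effect setting concrete, the sketch below implements one simple bootstrap-resampling p value for a Wald-type statistic of H0: pE - pP <= theta (pR - pP). It is a generic bootstrap-t construction under stated assumptions, not the exact unconditional or saddlepoint procedures evaluated in the paper, and all data are hypothetical.

        import numpy as np

        def effect_and_se(x, n, theta):
            # Arms ordered (experimental, reference, placebo); the
            # retention-of-effect contrast pE - theta*pR - (1 - theta)*pP
            # with its Wald standard error.
            pE, pR, pP = x / n
            eff = pE - theta * pR - (1.0 - theta) * pP
            var = (pE * (1 - pE) / n[0] + theta**2 * pR * (1 - pR) / n[1]
                   + (1 - theta)**2 * pP * (1 - pP) / n[2])
            return eff, np.sqrt(var)

        def bootstrap_p(x, n, theta=0.8, B=10000, seed=0):
            rng = np.random.default_rng(seed)
            eff, se = effect_and_se(x, n, theta)
            t_obs = eff / se
            boot = rng.binomial(n, x / n, size=(B, 3))   # resample each arm
            t_null = np.empty(B)
            for b in range(B):
                eff_b, se_b = effect_and_se(boot[b], n, theta)
                t_null[b] = (eff_b - eff) / se_b         # centre at observed effect
            return np.mean(t_null >= t_obs)              # one-sided p value

        x = np.array([48, 52, 30])    # responders per arm (hypothetical)
        n = np.array([80, 80, 80])    # arm sizes
        print(bootstrap_p(x, n))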

  5. ALCHEMY: a reliable method for automated SNP genotype calling for small batch sizes and highly homozygous populations

    PubMed Central

    Wright, Mark H.; Tung, Chih-Wei; Zhao, Keyan; Reynolds, Andy; McCouch, Susan R.; Bustamante, Carlos D.

    2010-01-01

    Motivation: The development of new high-throughput genotyping products requires a significant investment in testing and training samples to evaluate and optimize the product before it can be used reliably on new samples. One reason for this is current methods for automated calling of genotypes are based on clustering approaches which require a large number of samples to be analyzed simultaneously, or an extensive training dataset to seed clusters. In systems where inbred samples are of primary interest, current clustering approaches perform poorly due to the inability to clearly identify a heterozygote cluster. Results: As part of the development of two custom single nucleotide polymorphism genotyping products for Oryza sativa (domestic rice), we have developed a new genotype calling algorithm called ‘ALCHEMY’ based on statistical modeling of the raw intensity data rather than modelless clustering. A novel feature of the model is the ability to estimate and incorporate inbreeding information on a per sample basis allowing accurate genotyping of both inbred and heterozygous samples even when analyzed simultaneously. Since clustering is not used explicitly, ALCHEMY performs well on small sample sizes with accuracy exceeding 99% with as few as 18 samples. Availability: ALCHEMY is available for both commercial and academic use free of charge and distributed under the GNU General Public License at http://alchemy.sourceforge.net/ Contact: mhw6@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20926420
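    The key modelling move, per-sample inbreeding information shifting the genotype priors, can be sketched compactly. The snippet below shows how an inbreeding coefficient f suppresses the heterozygote class in the prior, which is what allows inbred samples to be called without a visible heterozygote cluster; the intensity likelihoods are left abstract, and everything shown is a schematic reading of the approach, not ALCHEMY's actual model.

        import numpy as np

        def genotype_priors(p: float, f: float) -> np.ndarray:
            # P(AA), P(AB), P(BB) under Hardy-Weinberg with inbreeding f:
            # f = 0 gives 2pq heterozygotes, f -> 1 removes them entirely.
            q = 1.0 - p
            return np.array([p*p + f*p*q, 2*p*q*(1.0 - f), q*q + f*p*q])

        def call_genotype(log_lik: np.ndarray, p: float, f: float):
            # log_lik: log-likelihood of the raw intensities under AA, AB, BB.
            posterior = np.exp(log_lik - log_lik.max()) * genotype_priors(p, f)
            posterior /= posterior.sum()
            return ["AA", "AB", "BB"][posterior.argmax()], posterior.max()

        # A borderline intensity pattern called with and without inbreeding:
        ll = np.log(np.array([0.30, 0.45, 0.25]))
        print(call_genotype(ll, p=0.5, f=0.0))   # outbred prior -> AB
        print(call_genotype(ll, p=0.5, f=0.95))  # inbred prior  -> homozygote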

  6. Evaluation of the molecular lipid organization in millimeter-sized stratum corneum by synchrotron X-ray diffraction.

    PubMed

    Suzuki, T; Uchino, T; Hatta, I; Miyazaki, Y; Kato, S; Sasaki, K; Kagawa, Y

    2018-04-29

    The aim of this study was to investigate whether the lamellar and lateral structures of the intercellular lipid of the stratum corneum (SC) can be evaluated from millimeter-sized SC (MSC) by X-ray diffraction. A 12 mm × 12 mm SC sheet from hairless mouse was divided into 16 pieces measuring 3 mm × 3 mm. From another sheet, 4 pieces of ultramillimeter-sized SC (USC; 1.5 mm × 1.5 mm) were prepared. Small- and wide-angle X-ray diffraction (SAXD and WAXD) measurements were performed on each piece. For MSC and USC, changes in the lamellar and lateral structures after the application of d-limonene were measured. The intensity of the SAXD peaks due to the lamellar phase of the long periodicity phase (LPP) and of the WAXD peaks due to the lateral hydrocarbon chain-packing structures varied among the MSC and USC pieces, although, over the 12 mm × 12 mm SC sheet as a whole, the intercellular lipid components and their proportions appeared nearly uniform. Application of d-limonene to MSC and USC pieces with strong SAXD and WAXD peaks resulted in the disappearance of the peaks due to the lamellar phase of the LPP and a decrease in peak intensity for the lateral hydrocarbon chain-packing structures. These changes are consistent with results from normal-sized samples. We found that selecting a sample piece with strong diffraction peaks due to the lamellar and lateral structures enables evaluation of the SC structure in small-sized samples by X-ray diffraction. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. SOME ENGINEERING PROPERTIES OF SHELLED AND KERNEL TEA (Camellia sinensis) SEEDS.

    PubMed

    Altuntas, Ebubekir; Yildiz, Merve

    2017-01-01

    Camellia sinensis is the source of tea leaves and is now an economic crop grown around the world. Tea seed oil has been used for cooking in China and other Asian countries for more than a thousand years. Tea is the most widely consumed beverage after water in the world; it is mainly produced in Asia and central Africa and exported throughout the world. Some engineering properties (size dimensions, sphericity, volume, bulk and true densities, friction coefficient, colour characteristics, and mechanical behaviour as rupture force) of shelled and kernel tea (Camellia sinensis) seeds were determined in this study. This research was carried out for shelled and kernel tea seeds. The shelled tea seeds used in this study were obtained from the East Black Sea Tea Cooperative Institution in the city of Rize, Turkey. Shelled and kernel tea seeds were characterized as large and small sizes. The average geometric mean diameter and seed mass of the shelled tea seeds were 15.8 mm and 1.47 g for the large size and 10.7 mm and 0.49 g for the small size, while those of the kernel tea seeds were 11.8 mm and 0.97 g for the large size and 8 mm and 0.31 g for the small size, respectively. The sphericity, surface area, and volume values were higher for the large size than for the small size in both the shelled and kernel tea samples. The colour intensity (chroma) of the shelled tea seeds ranged from 59.31 to 64.22 for the large size, while the chroma values of the kernel tea seeds ranged from 56.04 to 68.34 for the large size. The rupture force of kernel tea seeds was higher than that of shelled tea seeds for the large size along the X axis, whereas for large shelled tea seeds the rupture force along the X axis was higher than along the Y axis. The static coefficients of friction of shelled and kernel tea seeds, for both large and small sizes, had higher values on rubber than on the other friction surfaces. Engineering properties such as geometric mean diameter, sphericity, volume, bulk and true densities, coefficient of friction, L*, a*, b* colour characteristics, and rupture force of shelled and kernel tea seeds will serve in designing the equipment used in postharvest treatments.

  8. Differential Risk of Injury to Child Occupants by SUV Size

    PubMed Central

    Kallan, Michael J.; Durbin, Dennis R.; Elliott, Michael R.; Arbogast, Kristy B.; Winston, Flaura K.

    2004-01-01

    In the United States, the sport utility vehicle (SUV) is the fastest growing segment of the passenger vehicle fleet, yet SUVs vary widely in size and crashworthiness. Using data collected from a population-based sample of crashes in insured vehicles, we quantified the risk of injury to child occupants in SUVs by vehicle weight. There is an increased risk in both Small and Midsize SUVs when compared to Large SUVs. Parents who are purchasing an SUV should strongly consider the size of the vehicle and its crashworthiness. PMID:15319119

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    The primary goal of this project is to evaluate x-ray spectra generated within a scanning electron microscope (SEM) to determine elemental composition of small samples. This will be accomplished by performing Monte Carlo simulations of the electron and photon interactions in the sample and in the x-ray detector. The elemental inventories will be determined by an inverse process that progressively reduces the difference between the measured and simulated x-ray spectra by iteratively adjusting composition and geometric variables in the computational model. The intended benefit of this work will be to develop a method to perform quantitative analysis on substandard samples (heterogeneous phases, rough surfaces, small sizes, etc.) without involving standard elemental samples or empirical matrix corrections (i.e., true standardless quantitative analysis).

  10. An inexpensive and portable microvolumeter for rapid evaluation of biological samples.

    PubMed

    Douglass, John K; Wcislo, William T

    2010-08-01

    We describe an improved microvolumeter (MVM) for rapidly measuring volumes of small biological samples, including live zooplankton, embryos, and small animals and organs. Portability and low cost make this instrument suitable for widespread use, including at remote field sites. Beginning with Archimedes' principle, which states that immersing an arbitrarily shaped sample in a fluid-filled container displaces an equivalent volume, we identified procedures that maximize measurement accuracy and repeatability across a broad range of absolute volumes. Crucial steps include matching the overall configuration to the size of the sample, using reflected light to monitor fluid levels precisely, and accounting for evaporation during measurements. The resulting precision is at least 100 times higher than in previous displacement-based methods. Volumes are obtained much faster than by traditional histological or confocal methods and without shrinkage artifacts due to fixation or dehydration. Calibrations using volume standards confirmed accurate measurements of volumes as small as 0.06 microL. We validated the feasibility of evaluating soft-tissue samples by comparing volumes of freshly dissected ant brains measured with the MVM and by confocal reconstruction.

  11. Observational studies of patients in the emergency department: a comparison of 4 sampling methods.

    PubMed

    Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R

    2012-08-01

    We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ2 tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.
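    In the same spirit as the study's simulation, the toy sketch below draws a true random sample and a business-hours sample from a synthetic ED population in which one characteristic varies by hour of arrival, and checks each against the population with a chi-square test; the population and effect sizes are invented for illustration.

        import numpy as np
        from scipy.stats import chi2_contingency

        rng = np.random.default_rng(0)
        N = 20000
        hour = rng.integers(0, 24, N)
        # Hypothetical pattern: overnight arrivals are more often high acuity.
        high_acuity = rng.random(N) < np.where((hour < 8) | (hour >= 22), 0.45, 0.25)

        def sample_differs(idx) -> bool:
            # Chi-square comparison of the sample's acuity mix with the population's.
            table = np.array([
                [high_acuity[idx].sum(), (~high_acuity[idx]).sum()],
                [high_acuity.sum(), (~high_acuity).sum()],
            ])
            return chi2_contingency(table)[1] < 0.05

        random_idx = rng.choice(N, 400, replace=False)
        business_idx = rng.choice(np.flatnonzero((hour >= 9) & (hour < 17)), 400,
                                  replace=False)
        print(sample_differs(random_idx), sample_differs(business_idx))
        # Typically False, True: the convenience sample misrepresents acuity.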

  12. Brief Report: Accuracy and Response Time for the Recognition of Facial Emotions in a Large Sample of Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M.; Begeer, Sander

    2014-01-01

    The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion…

  13. Resolution of molecular weight distributions in slightly pyrolyzed cellulose using the weibull function

    Treesearch

    A. Broido; Hsiukang Yow

    1977-01-01

    Even before weight loss in the low-temperature pyrolysis of cellulose becomes significant, the average degree of polymerization of the partially pyrolyzed samples drops sharply. The gel permeation chromatograms of nitrated derivatives of the samples can be described in terms of a small number of mixed size populations—each component fitted within reasonable limits by a...

  14. Key Roles of Size and Crystallinity of Nanosized Iron Hydr(oxides) Stabilized by Humic Substances in Iron Bioavailability to Plants.

    PubMed

    Kulikova, Natalia A; Polyakov, Alexander Yu; Lebedev, Vasily A; Abroskin, Dmitry P; Volkov, Dmitry S; Pankratov, Denis A; Klein, Olga I; Senik, Svetlana V; Sorkina, Tatiana A; Garshev, Alexey V; Veligzhanin, Alexey A; Garcia Mina, Jose M; Perminova, Irina V

    2017-12-27

    Availability of Fe in soil to plants is closely related to the presence of humic substances (HS). Still, systematic data on the applicability of iron-based nanomaterials stabilized with HS as a source of plant nutrition are missing. The goal of our study was to establish a connection between properties of iron-based materials stabilized by HS and their bioavailability to plants. We prepared two samples of leonardite HS-stabilized iron-based materials with substantially different properties using the reported protocols and studied their physicochemical state in relation to iron uptake and other biological effects. We used Mössbauer spectroscopy, XRD, SAXS, and TEM to determine iron speciation, size, and crystallinity. One material (Fe-HA) consisted of polynuclear iron(III) (hydr)oxide complexes, so-called ferric polymers, distributed in an HS matrix. These complexes are composed of predominantly amorphous small-size components (<5 nm) with inclusions of larger crystalline particles (mean size (11 ± 4) nm). The other material was composed of well-crystalline feroxyhyte (δ'-FeOOH) NPs with mean transverse sizes of (35 ± 20) nm stabilized by small amounts of HS. Bioavailability studies were conducted on wheat plants under conditions of iron deficiency. The uptake studies showed that the small, amorphous ferric polymers were readily translocated into the leaves at a level comparable to Fe-EDTA, whereas the relatively large, crystalline feroxyhyte NPs were mostly sorbed on the roots. The obtained data are consistent with the size exclusion limits of cell wall pores (5-20 nm). Both samples demonstrated distinct beneficial effects with respect to photosynthetic activity and lipid biosynthesis. The obtained results might be of use for the production of iron-based nanomaterials stabilized by HS with tailored iron availability to plants. They can be applied as the sole source of iron nutrition as well as in combination with other elements, for example, for industrial production of "nanofortified" macrofertilizers (NPK).

  15. A risk assessment method for multi-site damage

    NASA Astrophysics Data System (ADS)

    Millwater, Harry Russell, Jr.

    This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
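    A stripped-down version of the weakest-link idea: for each realisation of random initial crack sizes, only the most critical pair of adjacent cracks is examined. The sketch below replaces the fatigue-growth computation with a simple geometric failure criterion, so it illustrates the sampling logic only; the distribution parameters and critical fraction are invented.

        import numpy as np

        def weakest_link_probability(n_cracks=20, ligament=1.0,
                                     critical_fraction=0.5, n_mc=100_000, seed=2):
            rng = np.random.default_rng(seed)
            # Random initial crack sizes at equally spaced collinear sites.
            a = rng.lognormal(mean=-3.0, sigma=0.5, size=(n_mc, n_cracks))
            pair_size = a[:, :-1] + a[:, 1:]    # combined size of adjacent pairs
            weakest = pair_size.max(axis=1)     # the pair that would link up first
            # Failure when the weakest link exceeds a critical share of the ligament.
            return np.mean(weakest > critical_fraction * ligament)

        print(weakest_link_probability())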

  16. The quantitative impact of the mesopore size on the mass transfer mechanism of the new 1.9μm fully porous Titan-C18 particles. I: analysis of small molecules.

    PubMed

    Gritti, Fabrice; Guiochon, Georges

    2015-03-06

    Previous data have shown that columns packed with the 1.9 µm fully porous Titan-C18 particles could deliver a minimum reduced plate height as small as 1.7. Additionally, the reduction of the mesopore size after C18 derivatization and the subsequent restriction for sample diffusivity across the Titan-C18 particles were found responsible for the unusually small value of the experimental optimum reduced velocity (5 versus 10 for conventional particles) and for the large values of the average reduced solid-liquid mass transfer resistance coefficients (0.032 versus 0.016) measured for a series of seven n-alkanophenones. The improvements in column efficiency made by increasing the average mesopore size of the Titan silica from 80 to 120 Å are investigated from a quantitative viewpoint based on the accurate measurements of the reduced coefficients (longitudinal diffusion, trans-particle mass transfer resistance, and eddy diffusion) and of the intra-particle diffusivity, pore, and surface diffusion for the same series of n-alkanophenone compounds. The experimental results reveal an increase (from 0% to 30%) of the longitudinal diffusion coefficients for the same sample concentration distribution (from 0.25 to 4) between the particle volume and the external volume of the column, a 40% increase of the intra-particle diffusivity for the same sample distribution (from 1 to 7) between the particle skeleton volume and the bulk phase, and a 15-30% decrease of the solid-liquid mass transfer coefficient for the n-alkanophenone compounds. Pore and surface diffusion are increased by 60% and 20%, respectively. The eddy dispersion term and the maximum column efficiency (295,000 plates/m) remain virtually unchanged. The rate of increase of the total plate height with increasing chromatographic speed is reduced by 20%, and it is mostly controlled (75% and 70% for the 80 and 120 Å pore sizes) by the flow rate dependence of the eddy dispersion term. Copyright © 2015 Elsevier B.V. All rights reserved.
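    For reference, these reduced coefficients enter the standard reduced plate-height decomposition used throughout this literature,

        h(ν) = a(ν) + b/ν + c·ν,

    where h is the reduced plate height, ν the reduced interstitial velocity, b the longitudinal-diffusion term, c the solid-liquid mass-transfer resistance, and a(ν) the velocity-dependent eddy-dispersion term. In these terms, the reported pore-size increase lowers c by 15-30% and raises b by up to 30%, while a(ν), which governs some 70-75% of the plate-height growth at high speed, is essentially unchanged.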

  17. Portrait of a small population of boreal toads (anaxyrus boreas)

    USGS Publications Warehouse

    Muths, E.; Scherer, R. D.

    2011-01-01

    Much attention has been given to the conservation of small populations, those that are small because of decline, and those that are naturally small. Small populations are of particular interest because ecological theory suggests that they are vulnerable to the deleterious effects of environmental, demographic, and genetic stochasticity as well as natural and human-induced catastrophes. However, testing theory and developing applicable conservation measures for small populations is hampered by sparse data. This lack of information is frequently driven by computational issues with small data sets that can be confounded by the impacts of stressors. We present estimates of demographic parameters from a small population of Boreal Toads (Anaxyrus boreas) that has been surveyed since 2001 by using capture-recapture methods. Estimates of annual adult survival probability are high relative to other Boreal Toad populations, whereas estimates of recruitment rate are low. Despite using simple models, clear patterns emerged from the analyses, suggesting that population size is constrained by low recruitment of adults and is declining slowly. These patterns provide insights that are useful in developing management directions for this small population, and this study serves as an example of the potential for small populations to yield robust and useful information despite sample size constraints. © 2011 The Herpetologists' League, Inc.
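    The simplest capture-recapture calculation gives a flavour of how such small-population data are used, although the study itself fits open-population models for survival and recruitment. Chapman's bias-corrected Lincoln-Petersen estimator is shown below with invented numbers.

        def chapman_abundance(n1: int, n2: int, m2: int) -> float:
            # Chapman's bias-corrected Lincoln-Petersen estimator: n1 marked on
            # the first occasion, n2 captured on the second, m2 of them already
            # marked. The +1 corrections keep it well behaved in small samples.
            return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

        # Hypothetical toad survey: 40 marked, 35 recaptured, 12 bearing marks.
        print(round(chapman_abundance(40, 35, 12)))  # ~112 adults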

  18. Speciation of Se and DOC in soil solution and their relation to Se bioavailability.

    PubMed

    Weng, Liping; Vega, Flora Alonso; Supriatin, Supriatin; Bussink, Wim; Van Riemsdijk, Willem H

    2011-01-01

    A 0.01 M CaCl2 extraction is often used to assess the bioavailability of plant nutrients in soils. However, almost no correlation was found between selenium (Se) in the soil extraction and the Se content in grass. The recently developed anion Donnan membrane technique was used to analyze the chemical speciation of Se in the 0.01 M CaCl2 extractions of grassland soils and the fractionation of DOC (dissolved organic carbon). The results show that most of the Se (67-86%) in the extractions (15 samples) is colloidal-sized Se. Only 13-34% of the extractable Se is selenate, selenite, and small organic Se (<1 nm). Colloidal Se is, most likely, Se bound to or incorporated in colloidal-sized organic matter. The dominant form of the small Se compounds (selenate versus selenite/small organic compounds) depends on the soil. A total of 47-85% of DOC is colloidal-sized and 15-53% consists of small organic molecules (<1 nm). In combination with soluble S (sulfur) and/or P (phosphorus), the concentration of small DOC can explain most of the variability in the Se content of grass. The results indicate that mineralization of organic Se is the most important factor controlling Se availability in soils. Competition with sulfate and phosphate needs to be taken into account. Further research is needed to verify whether the concentration of small DOC is a good indicator of the mineralization of soil organic matter.

  19. Semantic size does not matter: "bigger" words are not recognized faster.

    PubMed

    Kang, Sean H K; Yap, Melvin J; Tse, Chi-Shing; Kurby, Christopher A

    2011-06-01

    Sereno, O'Donnell, and Sereno (2009) reported that words are recognized faster in a lexical decision task when their referents are physically large than when they are small, suggesting that "semantic size" might be an important variable that should be considered in visual word recognition research and modelling. We sought to replicate their size effect, but failed to find a significant latency advantage in lexical decision for "big" words (cf. "small" words), even though we used the same word stimuli as Sereno et al. and had almost three times as many subjects. We also examined existing data from visual word recognition megastudies (e.g., English Lexicon Project) and found that semantic size is not a significant predictor of lexical decision performance after controlling for the standard lexical variables. In summary, the null results from our lab experiment--despite a much larger subject sample size than Sereno et al.--converged with our analysis of megastudy lexical decision performance, leading us to conclude that semantic size does not matter for word recognition. Discussion focuses on why semantic size (unlike some other semantic variables) is unlikely to play a role in lexical decision.

  20. Chandra Observations of Three Newly Discovered Quadruply Gravitationally Lensed Quasars

    NASA Astrophysics Data System (ADS)

    Pooley, David

    2017-09-01

    Our previous work has shown the unique power of Chandra observations of quadruply gravitationally lensed quasars to address several fundamental astrophysical issues. We have used these observations to (1) determine the cause of flux ratio anomalies, (2) measure the sizes of quasar accretion disks, (3) determine the dark matter content of the lensing galaxies, and (4) measure the stellar mass-to-light ratio (in fact, this is the only way to measure the stellar mass-to-light ratio beyond the solar neighborhood). In all cases, the main source of uncertainty in our results is the small size of the sample of known quads; only 15 systems are available for study with Chandra. We propose Chandra observations of three newly discovered quads, increasing the sample size by 20%.

  1. Surface degassing and modifications to vesicle size distributions in active basalt flows

    USGS Publications Warehouse

    Cashman, K.V.; Mangan, M.T.; Newman, S.

    1994-01-01

    The character of the vesicle population in lava flows includes several measurable parameters that may provide important constraints on lava flow dynamics and rheology. Interpretation of vesicle size distributions (VSDs), however, requires an understanding of vesiculation processes in feeder conduits, and of post-eruption modifications to VSDs during transport and emplacement. To this end we collected samples from active basalt flows at Kilauea Volcano: (1) near the effusive Kupaianaha vent; (2) through skylights in the approximately isothermal Wahaula and Kamoamoa tube systems transporting lava to the coast; (3) from surface breakouts at different locations along the lava tubes; and (4) from different locations in a single breakout from a lava tube 1 km from the Episode 51 vent at Pu'u 'O'o. Near-vent samples are characterized by VSDs that show exponentially decreasing numbers of vesicles with increasing vesicle size. These size distributions suggest that nucleation and growth of bubbles were continuous during ascent in the conduit, with minor associated bubble coalescence resulting from differential bubble rise. The entire vesicle population can be attributed to shallow exsolution of H2O-dominated gases at rates consistent with those predicted by simple diffusion models. Measurements of H2O, CO2 and S in the matrix glass show that the melt equilibrated rapidly at atmospheric pressure. Down-tube samples maintain similar VSD forms but show a progressive decrease in both overall vesicularity and mean vesicle size. We attribute this change to open system, "passive" rise and escape of larger bubbles to the surface. Such gas loss from the tube system results in the output of 1.2 × 10^6 g/day SO2, an output representing an addition of approximately 1% to overall volatile budget calculations. A steady increase in bubble number density with downstream distance is best explained by continued bubble nucleation at rates of 7-8 cm^-3 s^-1. Rates are approximately 25% of those estimated from the vent samples, and thus represent volatile supersaturations considerably less than those of the conduit. We note also that the small total volume represented by this new bubble population does not: (1) measurably deplete the melt in volatiles; or (2) make up for the overall vesicularity decrease resulting from the loss of larger bubbles. Surface breakout samples have distinctive VSDs characterized by an extreme depletion in the small vesicle population. This results in samples with much lower number densities and larger mean vesicle sizes than corresponding tube samples. Similar VSD patterns have been observed in solidified lava flows and are interpreted to result from either static (wall rupture) or dynamic (bubble rise and capture) coalescence. Through comparison with vent and tube vesicle populations, we suggest that, in addition to coalescence, the observed vesicle populations in the breakout samples have experienced a rapid loss of small vesicles consistent with 'ripening' of the VSD resulting from interbubble diffusion of volatiles. Confinement of ripening features to surface flows suggests that the thin skin that forms on surface breakouts may play a role in the observed VSD modification. © 1994.
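    The exponential VSD form described for the near-vent samples is usually quantified by fitting ln n against vesicle size. A minimal fit, with invented bin data, is sketched below; the intercept gives the nucleation-related number density n0 and the slope the characteristic size.

        import numpy as np

        def fit_exponential_vsd(size_mm, number_density):
            # Fit n(L) = n0 * exp(-L / L_char): linear regression of ln(n) on L.
            slope, intercept = np.polyfit(size_mm, np.log(number_density), 1)
            return np.exp(intercept), -1.0 / slope   # n0, characteristic size

        L = np.array([0.1, 0.2, 0.3, 0.4, 0.5])      # bin centres, mm (hypothetical)
        n = np.array([900.0, 410.0, 180.0, 80.0, 36.0])
        n0, L_char = fit_exponential_vsd(L, n)
        print(round(n0), round(L_char, 3))           # ~1980, ~0.127 mm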

  2. Intuitive statistics by 8-month-old infants

    PubMed Central

    Xu, Fei; Garcia, Vashti

    2008-01-01

    Human learners make inductive inferences based on small amounts of data: we generalize from samples to populations and vice versa. The academic discipline of statistics formalizes these intuitive statistical inferences. What is the origin of this ability? We report six experiments investigating whether 8-month-old infants are “intuitive statisticians.” Our results showed that, given a sample, the infants were able to make inferences about the population from which the sample had been drawn. Conversely, given information about the entire population of relatively small size, the infants were able to make predictions about the sample. Our findings provide evidence that infants possess a powerful mechanism for inductive learning, either using heuristics or basic principles of probability. This ability to make inferences based on samples or information about the population develops early and in the absence of schooling or explicit teaching. Human infants may be rational learners from very early in development. PMID:18378901

  3. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
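
    The paper's accuracy claims rest on simulating attained Type-I error under negative binomial sampling. As a minimal illustration of that kind of check (not the authors' actual two-parameter test), the sketch below estimates the attained level of an exact two-sided test of H0: p = p0 based on the number of Bernoulli trials up to the first success; p0, alpha and the replication count are arbitrary choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p0, alpha, reps = 0.3, 0.05, 20000   # arbitrary illustrative choices

# Two-sided rejection region for N ~ Geometric(p0), the number of
# Bernoulli trials up to and including the first success.
lo = stats.geom.ppf(alpha / 2, p0)
hi = stats.geom.ppf(1 - alpha / 2, p0)

rejections = 0
for _ in range(reps):
    n_trials = rng.geometric(p0)     # negative binomial sampling, r = 1
    if n_trials < lo or n_trials > hi:
        rejections += 1

# Discreteness makes the attained level conservative (here ~0.02).
print(f"attained Type-I error: {rejections / reps:.4f} (nominal {alpha})")
```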

  4. Integrated approaches for reducing sample size for measurements of trace elemental impurities in plutonium by ICP-OES and ICP-MS

    DOE PAGES

    Xu, Ning; Chamberlin, Rebecca M.; Thompson, Pam; ...

    2017-10-07

    This study has demonstrated that bulk plutonium chemical analysis can be performed at small scales (≤50 mg material) through three case studies. Analytical methods were developed for ICP-OES and ICP-MS instruments to measure trace impurities and gallium content in plutonium metals with comparable or improved detection limits, measurement accuracy and precision. In two case studies, the sample size has been reduced by 10×, and in the third case study, by as much as 5000×, so that the plutonium chemical analysis can be performed in a facility rated for lower-hazard and lower-security operations.

  5. 75 FR 12003 - Investing in Innovation Fund

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-12

    ..., where (1) the proposed practice, strategy, or program, or one similar to it, has been attempted previously, albeit with small sample sizes, and evaluated in a well-designed and well-implemented experimental or quasi-experimental study; or (2...

  6. Growth in Head Size during Infancy: Implications for Sound Localization.

    ERIC Educational Resources Information Center

    Clifton, Rachel K.; And Others

    1988-01-01

    Compared head circumference and interaural distance in infants between birth and 22 weeks of age and in a small sample of preschool children and adults. Calculated changes in interaural time differences according to age. Found a large shift in distance. (SKC)

  7. RESIDENTIAL INDOOR EXPOSURES OF CHILDREN TO PESTICIDES FOLLOWING LAWN APPLICATIONS

    EPA Science Inventory

    Methods have been developed to estimate children's residential exposures to pesticide residues and applied in a small field study of indoor exposures resulting from the intrusion of lawn-applied herbicide into the home. Sampling methods included size-selective indoor air sampli...

  8. The effective elastic properties of human trabecular bone may be approximated using micro-finite element analyses of embedded volume elements.

    PubMed

    Daszkiewicz, Karol; Maquer, Ghislain; Zysset, Philippe K

    2017-06-01

    Boundary conditions (BCs) and sample size affect the measured elastic properties of cancellous bone. Samples too small to be representative appear stiffer under kinematic uniform BCs (KUBCs) than under periodicity-compatible mixed uniform BCs (PMUBCs). To avoid those effects, we propose to determine the effective properties of trabecular bone using an embedded configuration. Cubic samples of various sizes (2.63, 5.29, 7.96, 10.58 and 15.87 mm) were cropped from micro-computed tomography (μCT) scans of femoral heads and vertebral bodies. They were converted into micro-finite element (μFE) models and their stiffness tensor was established via six uniaxial and shear load cases. PMUBCs- and KUBCs-based tensors were determined for each sample. "In situ" stiffness tensors were also evaluated for the embedded configuration, i.e. when the loads were transmitted to the samples via a layer of trabecular bone. The Zysset-Curnier model accounting for bone volume fraction and fabric anisotropy was fitted to those stiffness tensors, and the model parameters ν0 (Poisson's ratio), ε0 and μ0 (elastic and shear moduli) were compared between sizes. BCs and sample size had little impact on ν0. However, KUBCs- and PMUBCs-based ε0 and μ0, respectively, decreased and increased with growing size, though convergence was not reached even for our largest samples. Both BCs produced upper and lower bounds for the in situ values that were almost constant across sample dimensions, thus appearing as an approximation of the effective properties. PMUBCs seem also appropriate for mimicking the trabecular core, but they still underestimate its elastic properties (especially in shear) even for nearly orthotropic samples.

  9. Full-field transmission x-ray imaging with confocal polycapillary x-ray optics

    PubMed Central

    Sun, Tianxi; MacDonald, C. A.

    2013-01-01

    A transmission x-ray imaging setup based on a confocal combination of a polycapillary focusing x-ray optic followed by a polycapillary collimating x-ray optic was designed and demonstrated to have good resolution, better than the unmagnified pixel size and unlimited by the x-ray tube spot size. This imaging setup has potential application in x-ray imaging for small samples, for example, for histology specimens. PMID:23460760

  10. Trophic accumulation of PSP toxins in zooplankton during Alexandrium fundyense blooms in Casco Bay, Gulf of Maine, April-June 1998. II. Zooplankton abundance and size-fractionated community composition

    NASA Astrophysics Data System (ADS)

    Turner, Jefferson T.; Doucette, Gregory J.; Keafer, Bruce A.; Anderson, Donald M.

    2005-09-01

    During spring blooms of the toxic dinoflagellate Alexandrium fundyense in Casco Bay, Maine in 1998, we investigated vectorial intoxication of various zooplankton size fractions with PSP toxins, including zooplankton community composition from quantitative zooplankton samples (>102 μm), as well as zooplankton composition in relation to toxin levels in various size fractions (20-64, 64-100, 100-200, 200-500, >500 μm). Zooplankton abundance in 102 μm mesh samples was low (most values <10,000 animals m⁻³) from early April through early May, but increased to maxima in mid-June (cruise mean = 121,500 animals m⁻³). Quantitative zooplankton samples (>102 μm) were dominated by copepod nauplii, and Oithona similis copepodites and adults at most locations except for those furthest inshore. At these inshore locations, Acartia hudsonica copepodites and adults were usually dominant. Larger copepods such as Calanus finmarchicus, Centropages typicus, and Pseudocalanus spp. were found primarily offshore, and at much lower abundances than O. similis. Rotifers, mainly present from late April to late May, were most abundant inshore. The marine cladoceran Evadne nordmani was sporadically abundant, particularly in mid-June. Microplankton in 20-64 μm size fractions was generally dominated by A. fundyense, non-toxic dinoflagellates, and tintinnids. Microplankton in 64-100 μm size fractions was generally dominated by larger non-toxic dinoflagellates, tintinnids, aloricate ciliates, and copepod nauplii, and in early May, rotifers. Some samples (23%) in the 64-100 μm size fractions contained abundant cells of A. fundyense, presumably due to sieve clogging, but most did not contain A. fundyense cells. This suggests that PSP toxin levels in those samples were due to vectorial intoxication of microzooplankters such as heterotrophic dinoflagellates, tintinnids, aloricate ciliates, rotifers, and copepod nauplii via feeding on A. fundyense cells. Dominant taxa in zooplankton fractions varied in size. Samples in the 100-200 μm size fraction were overwhelmingly dominated in most cases by copepod nauplii and small copepodites of O. similis, and during late May, rotifers. Samples in the 200-500 μm size fraction contained fewer copepod nauplii, but progressively more copepodites and adults of O. similis, particularly at offshore locations. At the most inshore stations, copepodites and adults of A. hudsonica were the usual dominants. There were few copepod nauplii or O. similis in the >500 μm size fraction, which was usually dominated by copepodites and adults of C. finmarchicus, C. typicus, and Pseudocalanus spp. at offshore locations, and A. hudsonica inshore. Most of the higher PSP toxin concentrations were found in the larger zooplankton size fractions that were dominated by larger copepods such as C. finmarchicus and C. typicus. In contrast to our earlier findings, elevated toxin levels were also measured in numerous samples from smaller zooplankton size fractions, dominated by heterotrophic dinoflagellates, tintinnids and aloricate ciliates, rotifers, copepod nauplii, and smaller copepods such as O. similis and, at the most inshore locations, A. hudsonica. Thus, our data suggest that ingested PSP toxins are widespread throughout the zooplankton grazing community, and that potential vectors for intoxication of zooplankton assemblages include heterotrophic dinoflagellates, rotifers, protozoans, copepod nauplii, and small copepods.

  11. Temperature dependence of the size distribution function of InAs quantum dots on GaAs(001)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arciprete, F.; Fanfoni, M.; Patella, F.

    2010-04-15

    We present a detailed atomic-force-microscopy study of the effect of annealing on InAs/GaAs(001) quantum dots grown by molecular-beam epitaxy. Samples were grown at a low growth rate at 500 °C with an InAs coverage slightly greater than the critical thickness and subsequently annealed at several temperatures. We find that immediately quenched samples exhibit a bimodal size distribution with a high density of small dots (<50 nm³) while annealing at temperatures greater than 420 °C leads to a unimodal size distribution. This result indicates a coarsening process governing the evolution of the island size distribution function which is limited by the attachment-detachment of the adatoms at the island boundary. At higher temperatures one cannot ascribe a single rate-determining step for coarsening because of the increased role of adatom diffusion. However, for long annealing times at 500 °C the island size distribution is strongly affected by In desorption.

  12. Determination of Minimum Training Sample Size for Microarray-Based Cancer Outcome Prediction–An Empirical Assessment

    PubMed Central

    Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu

    2013-01-01

    The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers, which cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into routine clinical applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
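
    The SSNR protocol itself is not reproduced here, but the generic empirical route to a minimum training sample size is a learning curve: grow the training set and watch where validation performance flattens. A sketch with scikit-learn on synthetic "microarray-like" data (all dataset and parameter choices below are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic stand-in for a microarray dataset: few samples, many features.
X, y = make_classification(n_samples=300, n_features=500,
                           n_informative=20, random_state=0)

sizes, _, test_scores = learning_curve(
    LogisticRegression(max_iter=5000, C=0.1), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=5, scoring="roc_auc")

for n, s in zip(sizes, test_scores.mean(axis=1)):
    print(f"n_train = {n:3d}  mean CV AUC = {s:.3f}")
# The smallest n at which the curve flattens is an empirical minimum
# training sample size for this classifier/dataset pair.
```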

  13. Application of Diffusion Tensor Imaging Parameters to Detect Change in Longitudinal Studies in Cerebral Small Vessel Disease.

    PubMed

    Zeestraten, Eva Anna; Benjamin, Philip; Lambert, Christian; Lawrence, Andrew John; Williams, Owen Alan; Morris, Robin Guy; Barrick, Thomas Richard; Markus, Hugh Stephen

    2016-01-01

    Cerebral small vessel disease (SVD) is the major cause of vascular cognitive impairment, resulting in significant disability and reduced quality of life. Cognitive tests have been shown to be insensitive to change in longitudinal studies and, therefore, sensitive surrogate markers are needed to monitor disease progression and assess treatment effects in clinical trials. Diffusion tensor imaging (DTI) is thought to offer great potential in this regard; the sensitivity of the various parameters that can be derived from DTI is, however, unknown. We aimed to evaluate the differential sensitivity of DTI markers to detect SVD progression, and to estimate sample sizes required to assess therapeutic interventions aimed at halting decline based on DTI data. We investigated 99 patients with symptomatic SVD, defined as a clinical lacunar syndrome with MRI confirmation of a corresponding infarct as well as confluent white matter hyperintensities, over a 3-year follow-up period. We evaluated change in DTI histogram parameters using linear mixed effect models and calculated sample size estimates. Over the three-year follow-up period we observed a decline in fractional anisotropy and an increase in diffusivity in white matter tissue, and most parameters changed significantly. Mean diffusivity peak height was the most sensitive marker for SVD progression as it had the smallest sample size estimate. This suggests disease progression can be monitored sensitively using DTI histogram analysis and confirms DTI's potential as a surrogate marker for SVD.
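
    Sample size estimates of this kind typically come from the standard two-sample normal approximation applied to the expected change in a DTI parameter. A minimal sketch (the delta and SD values below are hypothetical, not the paper's):

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Sample size per arm to detect a mean difference `delta`
    given outcome standard deviation `sd` (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

# Hypothetical: halve a 3-year decline of 0.03 units in an MD-histogram
# parameter whose between-subject SD of change is 0.05.
print(n_per_arm(delta=0.5 * 0.03, sd=0.05))   # -> 175 per arm
```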

  14. Can mindfulness-based interventions influence cognitive functioning in older adults? A review and considerations for future research.

    PubMed

    Berk, Lotte; van Boxtel, Martin; van Os, Jim

    2017-11-01

    An increased need exists to examine factors that protect against age-related cognitive decline. There is preliminary evidence that meditation can improve cognitive function. However, most studies are cross-sectional and examine a wide variety of meditation techniques. This review focuses on the standard eight-week mindfulness-based interventions (MBIs) such as mindfulness-based stress reduction (MBSR) and mindfulness-based cognitive therapy (MBCT). We searched the PsychINFO, CINAHL, Web of Science, COCHRANE, and PubMed databases to identify original studies investigating the effects of MBIs on cognition in older adults. Six reports were included in the review, of which three were randomized controlled trials. Studies reported preliminary positive effects on memory, executive function and processing speed. However, most reports had a high risk of bias and sample sizes were small. The only study with low risk of bias, large sample size and an active control group reported no significant findings. We conclude that eight-week MBIs for older adults are feasible, but results on cognitive improvement are inconclusive due to a limited number of studies, small sample sizes, and a high risk of bias. Rather than a narrow focus on cognitive training per se, future research may productively shift to investigate MBI as a tool to alleviate suffering in older adults, and to prevent cognitive problems in later life already in younger target populations.

  15. Respondent-driven sampling and the recruitment of people with small injecting networks.

    PubMed

    Paquette, Dana; Bryant, Joanne; de Wit, John

    2012-05-01

    Respondent-driven sampling (RDS) is a form of chain-referral sampling, similar to snowball sampling, which was developed to reach hidden populations such as people who inject drugs (PWID). RDS is said to reach members of a hidden population that may not be accessible through other sampling methods. However, less attention has been paid to whether there are segments of the population that are more likely to be missed by RDS. This study examined the ability of RDS to capture people with small injecting networks. A study of PWID, using RDS, was conducted in 2009 in Sydney, Australia. The size of participants' injecting networks was examined by recruitment chain and wave. Participants' injecting network characteristics were compared to those of participants from a separate pharmacy-based study. A logistic regression analysis was conducted to examine the characteristics independently associated with having small injecting networks, using the combined RDS and pharmacy-based samples. In comparison with the pharmacy-recruited participants, RDS participants were almost 80% less likely to have small injecting networks, after adjusting for other variables. RDS participants were also more likely to have their injecting networks form a larger proportion of those in their social networks, and to have acquaintances as part of their injecting networks. Compared to those with larger injecting networks, individuals with small injecting networks were equally likely to engage in receptive sharing of injecting equipment, but less likely to have had contact with prevention services. These findings suggest that those with small injecting networks are an important group to recruit, and that RDS is less likely to capture these individuals.

  16. Small target pre-detection with an attention mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Yuehuan; Zhang, Tianxu; Wang, Guoyou

    2002-04-01

    We introduce the concept of predetection based on an attention mechanism to improve the efficiency of small-target detection by limiting the image region of detection. According to the characteristics of small-target detection, local contrast is taken as the only feature in predetection, and a nonlinear sampling model is adopted to make the predetection adaptive to small targets with different area sizes. To simplify the predetection itself and decrease the false alarm probability, neighboring nodes in the sampling grid are used to generate a saliency map, and a short-term memory is adopted to accelerate the "pop-out" of targets. We show that the proposed approach has low computational complexity. In addition, even in a cluttered background, attention can be led to targets in a satisfyingly small number of iterations, which ensures that the detection efficiency will not be decreased by false alarms. Experimental results are presented to demonstrate the applicability of the approach.
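
    The paper's nonlinear sampling model and short-term memory are not reproduced here, but the core idea of a local-contrast saliency map can be sketched simply: compare each pixel's small-neighborhood mean against the mean of a surrounding background window. A toy version (window sizes and the test frame are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_saliency(img, inner=3, outer=9):
    """Saliency as the excess of a small window's mean brightness over
    the mean of the surrounding background annulus."""
    img = img.astype(float)
    center = uniform_filter(img, inner)
    a_i, a_o = inner ** 2, outer ** 2
    annulus = (uniform_filter(img, outer) * a_o - center * a_i) / (a_o - a_i)
    return np.maximum(center - annulus, 0.0)

rng = np.random.default_rng(1)
frame = rng.normal(0.0, 1.0, (64, 64))   # noise background
frame[30:33, 40:43] += 4.0               # one dim 3x3 target
sal = local_contrast_saliency(frame)
print(np.unravel_index(sal.argmax(), sal.shape))   # near (31, 41)
```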

  17. Effects of depth and crayfish size on predation risk and foraging profitability of a lotic crayfish

    USGS Publications Warehouse

    Flinders, C.A.; Magoulick, D.D.

    2007-01-01

    We conducted field surveys and experiments to determine whether observed distributions of crayfish among habitats were influenced by differential resource availability, foraging profitability, and predation rates and whether these factors differed with crayfish size and habitat depth. We sampled available food resources (detritus and invertebrates) and shelter as rock substrate in deep (>50 cm) and shallow (<30 cm) habitats. We used an enclosure-exclosure experiment to examine the effects of water depth and crayfish size on crayfish biomass and survival, and to determine whether these factors affected silt accrual, algal abundance (chlorophyll a [chl a]), and detritus and invertebrate biomass (g ash-free dry mass) differently from enclosures without crayfish. We conducted tethering experiments to assess predation on small (13-17 mm carapace length [CL]) and large (23-30 mm CL) Orconectes marchandi and to determine whether predation rates differed with water depth. Invertebrate biomass was significantly greater in shallow water than in deep water, whereas detritus biomass did not differ significantly between depths. Cobble was significantly more abundant in shallow than in deep water. Depth and crayfish size had a significant interactive effect on change in size of enclosed crayfish when CL was used as a measure of size but not when biomass was used as a measure of size. CL of small crayfish increased significantly more in enclosures in shallow than in deep water, but CL of large crayfish changed very little at either depth. Silt, chl a, and detritus biomass were significantly lower on tiles in large- than in small- and no-crayfish enclosures, and invertebrate biomass was significantly lower in large- than in no-crayfish enclosures. Significantly more crayfish were consumed in deep than in shallow water regardless of crayfish size. Our results suggest that predation and resource availability might influence the depth distribution of small and large crayfish. Small crayfish grew faster in shallow habitats where they might have had a fitness advantage caused by high prey availability and reduced predation risk. Size-dependent reduction of silt by crayfish might influence benthic habitats where large crayfish are abundant. © 2007 by The North American Benthological Society.

  18. Diagnostic test accuracy and prevalence inferences based on joint and sequential testing with finite population sampling.

    PubMed

    Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O

    2004-07-30

    The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence estimation assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g., a child-care centre, a village in Africa, or a cattle herd) are sampled or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real and one simulated data sets, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
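
    A simplified frequentist analogue of the paper's finding about prevalence: once a large fraction of a finite population is sampled, the hypergeometric (finite population) standard error falls below the binomial one. Sketch (the numbers are illustrative):

```python
from math import sqrt

def se_prevalence(p_hat, n, N=None):
    """Standard error of an estimated prevalence from n sampled units,
    with the finite population correction when the population size N
    is known (hypergeometric-style sampling)."""
    se = sqrt(p_hat * (1 - p_hat) / n)
    if N is not None:
        se *= sqrt((N - n) / (N - 1))
    return se

# Sampling 40 animals: infinite-population model vs a herd of only 50.
print(se_prevalence(0.25, 40))         # 0.068 (binomial)
print(se_prevalence(0.25, 40, N=50))   # 0.031 (finite herd)
```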

  19. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
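
    The conditional-versus-marginal behavior of the sample average is easy to see by simulation. The toy stopping rule below is an arbitrary choice, not one from the paper: the marginal mean of the sample average shows only a small finite-sample bias, while the means conditional on the realized sample size are biased in opposite directions.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, n1, n2, reps = 0.0, 25, 50, 20000

overall, early, late = [], [], []
for _ in range(reps):
    x = rng.normal(mu, 1.0, n2)
    # Toy deterministic stopping rule: stop at n1 if the interim mean
    # exceeds 0.2, otherwise continue to the maximum size n2.
    if x[:n1].mean() > 0.2:
        m = x[:n1].mean()
        early.append(m)
    else:
        m = x.mean()
        late.append(m)
    overall.append(m)

print(f"marginal mean:             {np.mean(overall):+.3f}")  # small bias
print(f"conditional on stopping:   {np.mean(early):+.3f}")    # biased up
print(f"conditional on continuing: {np.mean(late):+.3f}")     # biased down
```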

  1. Imaging inert fluorinated gases in cracks: perhaps in David's ankles.

    PubMed

    Kuethe, Dean O; Scholz, Markus D; Fantazzini, Paola

    2007-05-01

    Inspired by the challenge of determining the nature of cracks on the ankles of Michelangelo's statue David, we discovered that one can image SF(6) gas in cracks in marble samples with alacrity. The imaging method produces images of gas with a signal-to-noise ratio (SNR) of 100-250, which is very high for magnetic resonance imaging (MRI) in general, let alone for an image of a gas at thermal equilibrium polarization. To put this unusual SNR in better perspective, we imaged SF(6) in a crack in a marble sample and imaged the lung tissue of a live rat (a more familiar variety of sample to many MRI scientists) using the same pulse sequence, the same size coils and the same MRI system. In both cases, we try to image subvoxel thin sheets of material that should appear bright against a darker background. By choosing imaging parameters appropriate for the different relaxation properties of SF(6) gas versus lung tissue and by choosing voxel sizes appropriate for the different goals of detecting subvoxel cracks on marble versus resolving subvoxel thin sheets of tissue, the SNR for voxels full of material was 220 and 14 for marble and lung, respectively. A major factor is that we chose large voxels to optimize SNR for detecting small cracks and we chose small voxels for resolving lung features at the expense of SNR. Imaging physics will cooperate to provide detection of small cracks on marble, but David's size poses a challenge for magnet designers. For the modest goal of imaging cracks in the left ankle, we desire a magnet with an approximately 32-cm gap and a flux density of approximately 0.36 T that weighs <500 kg.

  2. Inferred Paternity and Male Reproductive Success in a Killer Whale (Orcinus orca) Population.

    PubMed

    Ford, Michael J; Hanson, M Bradley; Hempelmann, Jennifer A; Ayres, Katherine L; Emmons, Candice K; Schorr, Gregory S; Baird, Robin W; Balcomb, Kenneth C; Wasser, Samuel K; Parsons, Kim M; Balcomb-Bartok, Kelly

    2011-01-01

    We used data from 78 individuals at 26 microsatellite loci to infer parental and sibling relationships within a community of fish-eating ("resident") eastern North Pacific killer whales (Orcinus orca). Paternity analysis involving 15 mother/calf pairs and 8 potential fathers and whole-pedigree analysis of the entire sample produced consistent results. The variance in male reproductive success was greater than expected by chance and similar to that of other aquatic mammals. Although the number of confirmed paternities was small, reproductive success appeared to increase with male age and size. We found no evidence that males from outside this small population sired any of the sampled individuals. In contrast to previous results in a different population, many offspring were the result of matings within the same "pod" (long-term social group). Despite this pattern of breeding within social groups, we found no evidence of offspring produced by matings between close relatives, and the average internal relatedness of individuals was significantly less than expected if mating were random. The population's estimated effective size was <30 or about 1/3 of the current census size. Patterns of allele frequency variation were consistent with a population bottleneck.

  3. Characterization of microplastic and mesoplastic debris in sediments from Kamilo Beach and Kahuku Beach, Hawai'i.

    PubMed

    Young, Alan M; Elliott, James A

    2016-12-15

    Sediment samples were collected from two Hawai'ian beaches, Kahuku Beach on O'ahu and Kamilo Beach on the Big Island of Hawai'i. A total of 48,988 large microplastic and small mesoplastic (0.5-8 mm) particles were handpicked from the samples and sorted into four size classes (0.5-1 mm, 1-2 mm, 2-4 mm, 4-8 mm) and nine color categories. For all sizes combined the most common plastic fragment color was white/transparent (71.8%) followed by blue (8.5%), green (7.5%), black/grey (7.3%), red/pink (2.6%), yellow (1.2%), orange (0.6%), brown (0.3%) and purple (0.2%). Color frequency distribution based on both numbers and mass of particles was not significantly different among the various size classes nor between the two beaches. White and black/grey resin pellets accounted for 11.3% of the particles collected from Kahuku Beach and 4.2% of the particles from Kamilo Beach. Plastic type based on Raman spectrometer analysis of a small representative subsample indicated that most of the fragments were polyethylene and a few were polypropylene. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Magnetic properties of atmospheric PMx in a small settlement during heating and non-heating season

    NASA Astrophysics Data System (ADS)

    Petrovsky, E.; Kotlik, B.; Zboril, R.; Kapicka, A.; Grison, H.

    2012-04-01

    Magnetic properties of environmental samples can serve as a fast and relatively cheap proxy method to investigate the occurrence of iron oxides. These methods are very sensitive in detecting strongly magnetic compounds such as magnetite and maghemite and can reveal the concentration and assess the grain-size distribution of these minerals. This information can be significant in estimating, e.g., the source of pollutants, monitoring pollution load, or investigating seasonal and climatic effects. We studied magnetic properties of PM1, PM2.5 and PM10, collected over 32-48 hours in a small settlement in south Bohemia during the heating and non-heating seasons. The site is rather remote, with negligible traffic and industrial contributions to air pollution. Thus, the suggested seasonal effect should be dominantly due to local (domestic) heating, burning wood or coal. In our contribution we show typical differences in PMx concentration, which is much higher in the winter (heating) sample, accompanied by SEM analyses and magnetic data on the concentration and grain-size distribution of magnetite/maghemite particles. While the concentration of Fe-oxides does not vary that much, significant seasonal differences were observed in composition and grain-size distribution, reflecting different sources of the dust particles.

  5. Incorporating partially identified sample segments into acreage estimation procedures: Estimates using only observations from the current year

    NASA Technical Reports Server (NTRS)

    Sielken, R. L., Jr. (Principal Investigator)

    1981-01-01

    Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented. The relative empirical behavior of these estimators is discussed, as are the effects of segment sample size and amount of partial identification. The principal recommendations are (1) not to exclude, but rather to incorporate, partially identified sample segments into the estimation procedure, (2) to try to avoid having a large percentage (say 80%) of only partially identified segments in the sample, and (3) to use the maximum likelihood estimator, although the weighted least squares estimator and least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.

  6. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    NASA Astrophysics Data System (ADS)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  7. Effect of growth ring orientation and placement of earlywood and latewood on MOE and MOR of very-small clear Douglas-fir beams.

    Treesearch

    Amy T. Grotta; Robert J. Leichti; Barbara L. Gartner; G.R. Johnson

    2005-01-01

    ASTM standard sizes for bending tests (either 50 x 50 mm or 25 x 25 mm in cross-section) are not always suitable for research purposes that characterize smaller sections of wood. Moreover, the ASTM standards specify loading the sample on the longitudinal-tangential surface. If specimens are small enough, then the effects of both growth-ring orientation and whether...

  8. A simple method for the analysis of particle sizes of forage and total mixed rations.

    PubMed

    Lammers, B P; Buckmaster, D R; Heinrichs, A J

    1996-05-01

    A simple separator was developed to determine the particle sizes of forage and TMR that allows for easy separation of wet forage into three fractions and also allows plotting of the particle size distribution. The device was designed to mimic the laboratory-scale separator for forage particle sizes that was specified by Standard S424 of the American Society of Agricultural Engineers. A comparison of results using the standard device and the newly developed separator indicated no difference in ability to predict fractions of particles with maximum length of less than 8 and 19 mm. The separator requires a small quantity of sample (1.4 L) and is manually operated. The materials on the screens and bottom pan were weighed to obtain the cumulative percentage of sample that was undersize for the two fractions. The results were then plotted using the Weibull distribution, which proved to be the best fit for the data. Convenience samples of haycrop silage, corn silage, and TMR from farms in the northeastern US were analyzed using the forage and TMR separator, and the range of observed values are given.
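
    Fitting the Weibull distribution to the separator's output amounts to matching the cumulative undersize fractions measured below the 8-mm and 19-mm screens. A sketch with hypothetical sieving fractions (not values from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative undersize fractions from the two screens.
size_mm = np.array([8.0, 19.0])
under = np.array([0.42, 0.85])

def weibull_cdf(x, scale, shape):
    return 1.0 - np.exp(-(x / scale) ** shape)

(scale, shape), _ = curve_fit(weibull_cdf, size_mm, under, p0=(10.0, 1.0))
median = scale * np.log(2.0) ** (1.0 / shape)
print(f"scale = {scale:.1f} mm, shape = {shape:.2f}, median = {median:.1f} mm")
```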

  9. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
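
    The two-stage analysis described above is compact to state in code: collapse each cluster to its mean, then compare arms with a t-test on the means. The sketch below uses a Welch t-test and synthetic unbalanced clusters; note the paper's preferred variant weights the cluster means by inverse estimated variance, which is not implemented here.

```python
import numpy as np
from scipy import stats

def two_stage_t(clusters_a, clusters_b):
    """Collapse each cluster to its mean, then compare arms with a
    Welch t-test (unweighted two-stage analysis)."""
    return stats.ttest_ind([np.mean(c) for c in clusters_a],
                           [np.mean(c) for c in clusters_b],
                           equal_var=False)

rng = np.random.default_rng(3)

def make_arm(shift, n_clusters=6):
    # Unequal cluster sizes; a random cluster effect induces the ICC.
    return [rng.normal(shift + rng.normal(0.0, 0.3), 1.0,
                       size=rng.integers(5, 40))
            for _ in range(n_clusters)]

print(two_stage_t(make_arm(0.0), make_arm(0.5)))
```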

  10. Comparing the accuracy and precision of three techniques used for estimating missing landmarks when reconstructing fossil hominin crania.

    PubMed

    Neeser, Rudolph; Ackermann, Rebecca Rogers; Gain, James

    2009-09-01

    Various methodological approaches have been used for reconstructing fossil hominin remains in order to increase sample sizes and to better understand morphological variation. Among these, morphometric quantitative techniques for reconstruction are increasingly common. Here we compare the accuracy of three approaches--mean substitution, thin plate splines, and multiple linear regression--for estimating missing landmarks of damaged fossil specimens. Comparisons are made varying the number of missing landmarks, sample sizes, and the reference species of the population used to perform the estimation. The testing is performed on landmark data from individuals of Homo sapiens, Pan troglodytes and Gorilla gorilla, and nine hominin fossil specimens. Results suggest that when a small, same-species fossil reference sample is available to guide reconstructions, thin plate spline approaches perform best. However, if no such sample is available (or if the species of the damaged individual is uncertain), estimates of missing morphology based on a single individual (or even a small sample) of close taxonomic affinity are less accurate than those based on a large sample of individuals drawn from more distantly related extant populations using a technique (such as a regression method) able to leverage the information (e.g., variation/covariation patterning) contained in this large sample. Thin plate splines also show an unexpectedly large amount of error in estimating landmarks, especially over large areas. Recommendations are made for estimating missing landmarks under various scenarios. Copyright 2009 Wiley-Liss, Inc.
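
    Of the three compared techniques, the regression approach is the most compact to illustrate: learn a linear map from observed to missing coordinates on a complete reference sample, then apply it to the damaged specimen. A toy sketch (synthetic "landmarks", not fossil data):

```python
import numpy as np

def regress_missing(reference, observed, miss_idx):
    """Estimate a damaged specimen's missing landmark coordinates by
    multivariate linear regression on its observed coordinates, with
    the mapping learned from a complete reference sample."""
    obs_idx = np.setdiff1d(np.arange(reference.shape[1]), miss_idx)
    mx = reference[:, obs_idx].mean(0)
    my = reference[:, miss_idx].mean(0)
    B, *_ = np.linalg.lstsq(reference[:, obs_idx] - mx,
                            reference[:, miss_idx] - my, rcond=None)
    return (observed[obs_idx] - mx) @ B + my

rng = np.random.default_rng(4)
# Toy reference sample: 40 "specimens" with 10 correlated coordinates.
ref = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 10)) \
      + rng.normal(0.0, 0.1, (40, 10))
est = regress_missing(ref[1:], ref[0], miss_idx=np.array([2, 7]))
print(est, ref[0, [2, 7]])   # estimate vs true values
```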

  11. Comparison of Bootstrapping and Markov Chain Monte Carlo for Copula Analysis of Hydrological Droughts

    NASA Astrophysics Data System (ADS)

    Yang, P.; Ng, T. L.; Yang, W.

    2015-12-01

    Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be at the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought-related variables; (2) the resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100), bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC regardless of whether or not an informative prior exists.
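
    For the Clayton copula, a bootstrap CI for the dependence parameter can be built by inverting Kendall's tau (theta = 2*tau/(1 - tau)) on resampled pairs. The sketch below uses synthetic duration-severity pairs at the paper's small sample size of 50; the marginals and dependence strength are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import kendalltau

def clayton_theta(u, v):
    """Clayton dependence parameter via Kendall's tau inversion."""
    tau = kendalltau(u, v)[0]
    return 2.0 * tau / (1.0 - tau)

rng = np.random.default_rng(5)
n = 50                                  # small-sample case
dur = rng.gamma(2.0, 5.0, n)            # hypothetical drought durations
sev = dur * rng.lognormal(0.0, 0.4, n)  # positively dependent severities

boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)           # resample pairs with replacement
    boot.append(clayton_theta(dur[i], sev[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"theta = {clayton_theta(dur, sev):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```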

  12. Are There Scenarios When the Use of Non-Placebo-Control Groups in Experimental Trial Designs Increase Expected Value to Society?

    PubMed

    Uyei, Jennifer; Braithwaite, R Scott

    2016-01-01

    Despite the benefits of the placebo-controlled trial design, it is limited by its inability to quantify total benefits and harms. Such trials, for example, are not designed to detect an intervention's placebo or nocebo effects, which if detected could alter the benefit-to-harm balance and change a decision to adopt or reject an intervention. In this article, we explore scenarios in which alternative experimental trial designs, which differ in the type of control used, influence expected value across a range of pretest assumptions and study sample sizes. We developed a decision model to compare 3 trial designs and their implications for decision making: 2-arm placebo-controlled trial ("placebo-control"), 2-arm intervention v. do nothing trial ("null-control"), and an innovative 3-arm trial design: intervention v. do nothing v. placebo trial ("novel design"). Four scenarios were explored regarding particular attributes of a hypothetical intervention: 1) all benefits and no harm, 2) no biological effect, 3) only biological effects, and 4) surreptitious harm (no biological benefit or nocebo effect). Scenario 1: When sample sizes were very small, the null-control was preferred, but as sample sizes increased, expected value of all 3 designs converged. Scenario 2: The null-control was preferred regardless of sample size when the ratio of placebo to nocebo effect was >1; otherwise, the placebo-control was preferred. Scenario 3: When sample size was very small, the placebo-control was preferred when benefits outweighed harms, but the novel design was preferred when harms outweighed benefits. Scenario 4: The placebo-control was preferred when harms outweighed placebo benefits; otherwise, preference went to the null-control. Scenarios are hypothetical, study designs have not been tested in a real-world setting, blinding is not possible in all designs, and some may argue the novel design poses ethical concerns. We identified scenarios in which alternative experimental study designs would confer greater expected value than the placebo-controlled trial design. The likelihood and prevalence of such situations warrant further study. © The Author(s) 2015.

  13. Reproductive strategies and seasonal changes in the somatic indices of seven small-bodied fishes in Atlantic Canada in relation to study design for environmental effects monitoring.

    PubMed

    Barrett, Timothy J; Brasfield, Sandra M; Carroll, Leslie C; Doyle, Meghan A; van den Heuvel, Michael R; Munkittrick, Kelly R

    2015-05-01

    Small-bodied fishes are more commonly being used in environmental effects monitoring (EEM) studies. There is a lack of understanding of the biological characteristics of many small-bodied species, which hinders study designs for monitoring studies. For example, 72% of fish population surveys in Canada's EEM program for pulp and paper mills that used small-bodied fishes were conducted outside of the reproductive period of the species. This resulted in an inadequate assessment of the EEM program's primary effect endpoint (reproduction) for these studies. The present study examined seasonal changes in liver size, gonad size, and condition in seven freshwater and estuarine small-bodied fishes in Atlantic Canada. These data were used to examine differences in reproductive strategies and patterns of energy storage among species. Female gonadal recrudescence in all seven species began primarily in the 2-month period in the spring before spawning. Male gonadal development was concurrent with females in five species; however, gonadal recrudescence began in the fall in male three-spined stickleback (Gasterosteus aculeatus) and slimy sculpin (Cottus cognatus). The spawning period for each species was estimated from the decline in relative ovary size after its seasonal maximum value in spring. The duration of the spawning period reflected the reproductive strategy (single vs multiple spawning) of the species. Optimal sampling periods to assess reproductive impacts in each species were determined based on seasonal changes in ovary size and were identified to be during the prespawning period when gonads are developing and variability in relative gonad size is at a minimum.

  14. Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.

    PubMed

    Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng

    2015-01-01

    Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
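
    The procedure under scrutiny is the percentile bootstrap of the indirect effect a*b. A minimal sketch at n = 40, inside the 20-80 range the paper examines (the coefficients and error scales are arbitrary):

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b from two OLS fits: m ~ x, then y ~ x + m."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(6)
n = 40                                   # inside the 20-80 range at issue
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

boot = []
for _ in range(5000):
    i = rng.integers(0, n, n)            # resample cases with replacement
    boot.append(indirect_effect(x[i], m[i], y[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect_effect(x, m, y):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```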

  15. Nanoliter hemolymph sampling and analysis of individual adult Drosophila melanogaster.

    PubMed

    Piyankarage, Sujeewa C; Featherstone, David E; Shippy, Scott A

    2012-05-15

    The fruit fly (Drosophila melanogaster) is an extensively used and powerful, genetic model organism. However, chemical studies using individual flies have been limited by the animal's small size. Introduced here is a method to sample nanoliter hemolymph volumes from individual adult fruit-flies for chemical analysis. The technique results in an ability to distinguish hemolymph chemical variations with developmental stage, fly sex, and sampling conditions. Also presented is the means for two-point monitoring of hemolymph composition for individual flies.

  16. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    NASA Astrophysics Data System (ADS)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

    Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample size, we show that our MSCs can be used for CO2 samples of as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests, was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained a very promising, low process blank for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during the passive sampling with the depth samplers.

  17. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  18. Handling limited datasets with neural networks in medical applications: A small-data approach.

    PubMed

    Shaikhina, Torgyn; Khovanova, Natalia A

    2017-01-01

    Single-centre studies in the medical domain are often characterised by limited samples due to the complexity and high costs of patient data collection. Machine learning methods for regression modelling of small datasets (less than 10 observations per predictor variable) remain scarce. Our work bridges this gap by developing a novel framework for application of artificial neural networks (NNs) for regression tasks involving small medical datasets. In order to address the sporadic fluctuations and validation issues that appear in regression NNs trained on small datasets, the methods of multiple runs and surrogate data analysis were proposed in this work. The approach was compared to the state-of-the-art ensemble NNs; the effect of dataset size on NN performance was also investigated. The proposed framework was applied for the prediction of compressive strength (CS) of femoral trabecular bone in patients suffering from severe osteoarthritis. The NN model was able to estimate the CS of osteoarthritic trabecular bone from its structural and biological properties with a standard error of 0.85 MPa. When evaluated on independent test samples, the NN achieved accuracy of 98.3%, outperforming an ensemble NN model by 11%. We reproduce this result on CS data of another porous solid (concrete) and demonstrate that the proposed framework allows for an NN modelled with as few as 56 samples to generalise on 300 independent test samples with 86.5% accuracy, which is comparable to the performance of an NN developed with an 18 times larger dataset (1030 samples). The significance of this work is two-fold: the practical application allows for non-destructive prediction of bone fracture risk, while the novel methodology extends beyond the task considered in this study and provides a general framework for application of regression NNs to medical problems characterised by limited dataset sizes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
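
    The multiple-runs idea can be sketched independently of the paper's exact framework: refit the network under different random initialisations and report the distribution of cross-validated scores rather than a single, possibly lucky, run. The surrogate data analysis step is omitted; the dataset and hyperparameters below are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Placeholder small dataset: 56 samples, 5 predictors.
X = rng.normal(size=(56, 5))
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + rng.normal(0.0, 0.3, 56)

# Multiple runs: repeat training with different initialisations and
# keep the score distribution instead of trusting one fit.
scores = [cross_val_score(
              MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                           max_iter=5000, random_state=seed),
              X, y, cv=5, scoring="r2").mean()
          for seed in range(20)]
print(f"median R2 = {np.median(scores):.3f}, "
      f"IQR = {np.percentile(scores, 25):.3f}..{np.percentile(scores, 75):.3f}")
```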

  19. High Prevalence of Anaplasma spp. in Small Ruminants in Morocco.

    PubMed

    Ait Lbacha, H; Alali, S; Zouagui, Z; El Mamoun, L; Rhalem, A; Petit, E; Haddad, N; Gandoin, C; Boulouis, H-J; Maillard, R

    2017-02-01

    The prevalence of infection by Anaplasma spp. (including Anaplasma phagocytophilum) was determined using blood smear microscopy and PCR through screening of small ruminant blood samples collected from seven regions of Morocco. Co-infections of Anaplasma spp., Babesia spp., Theileria spp. and Mycoplasma spp. were investigated and risk factors for Anaplasma spp. infection assessed. A total of 422 small ruminant blood samples were randomly collected from 70 flocks. Individual animal data (breed, age, tick burden and previous treatment) and flock data (GPS coordinates of the farm, size of flock and livestock production system) were collected. Upon examination of blood smears, 375 blood samples (88.9%) were found to contain Anaplasma-like erythrocytic inclusion bodies. Upon screening with a large-spectrum PCR targeting the Anaplasma 16S rRNA region, 303 (71%) samples were found to be positive. All 303 samples screened with the A. phagocytophilum-specific PCR, which targets the msp2 region, were found to be negative. Differences in prevalence were found to be statistically significant with regard to region, altitude, flock size, livestock production system, grazing system, presence of clinical cases and application of prophylactic measures against ticks and tick-borne diseases. Kappa analysis revealed a poor concordance between microscopy and PCR (k = 0.14). Agreement with PCR is improved by considering microscopy and packed cell volume (PCV) in parallel. The prevalence of double infections was found to be 1.7, 2.5 and 24% for Anaplasma-Babesia, Anaplasma-Mycoplasma and Anaplasma-Theileria, respectively. Co-infection with three or more haemoparasites was found in 1.6% of animals examined. In conclusion, we demonstrate the high burden of anaplasmosis in small ruminants in Morocco and the high prevalence of co-infections of tick-borne diseases. There is an urgent need to improve the control of this neglected group of diseases. © 2015 Blackwell Verlag GmbH.

  20. Assessing grain-size correspondence between flow and deposits of controlled floods in the Colorado River, USA

    USGS Publications Warehouse

    Draut, Amy; Rubin, David M.

    2013-01-01

    Flood-deposited sediment has been used to decipher environmental parameters such as variability in watershed sediment supply, paleoflood hydrology, and channel morphology. It is not well known, however, how accurately the deposits reflect sedimentary processes within the flow, and hence what sampling intensity is needed to decipher records of recent or long-past conditions. We examine these problems using deposits from dam-regulated floods in the Colorado River corridor through Marble Canyon–Grand Canyon, Arizona, U.S.A., in which steady-peaked floods represent a simple end-member case. For these simple floods, most deposits show inverse grading that reflects coarsening suspended sediment (a result of fine-sediment-supply limitation), but there is enough eddy-scale variability that some profiles show normal grading that did not reflect grain-size evolution in the flow as a whole. To infer systemwide grain-size evolution in modern or ancient depositional systems requires sampling enough deposit profiles that the standard error of the mean of grain-size-change measurements becomes small relative to the magnitude of observed changes. For simple, steady-peaked floods, 5–10 profiles or fewer may suffice to characterize grain-size trends robustly, but many more samples may be needed from deposits with greater variability in their grain-size evolution.
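
    The sampling-intensity argument can be checked with a few lines of arithmetic: the sketch below, using invented grain-size-change values for a set of measured profiles, compares the standard error of the mean change against the magnitude of the mean itself.

        import numpy as np

        # Mean grain-size change measured in n deposit profiles (illustrative
        # values, not data from the study).
        changes = np.array([0.21, 0.35, -0.05, 0.28, 0.18, 0.40, 0.12, 0.25])
        sem = changes.std(ddof=1) / np.sqrt(changes.size)
        print(f"mean change {changes.mean():.2f}, standard error {sem:.2f}")
        # The systemwide trend is resolvable once the mean is large relative
        # to its standard error (say |mean|/SE > 2); otherwise sample more
        # profiles.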

  1. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature

    PubMed Central

    Szucs, Denes; Ioannidis, John P. A.

    2017-01-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience. PMID:28253258
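
    For readers who want to reproduce this style of calculation, the sketch below computes two-sample t-test power with statsmodels and a false report probability from an assumed prior probability that the tested hypothesis is true; the per-group n of 20 and the prior of 0.1 are illustrative assumptions, not values from the paper.

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
            p = analysis.power(effect_size=d, nobs1=20, alpha=0.05, ratio=1.0)
            print(f"{label} effect (d={d}): power = {p:.2f}")

        # False report probability: share of significant findings that are
        # false when power is low and true effects are rare.
        alpha, power, prior = 0.05, 0.2, 0.1
        frp = alpha * (1 - prior) / (alpha * (1 - prior) + power * prior)
        print(f"false report probability = {frp:.2f}")   # ~0.69 here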

  2. Integrative Analysis of Cancer Diagnosis Studies with Composite Penalization

    PubMed Central

    Liu, Jin; Huang, Jian; Ma, Shuangge

    2013-01-01

    Summary: In cancer diagnosis studies, high-throughput gene profiling has been extensively conducted, searching for genes whose expressions may serve as markers. Data generated from such studies have the “large d, small n” feature, with the number of genes profiled much larger than the sample size. Penalization has been extensively adopted for simultaneous estimation and marker selection. Because of small sample sizes, markers identified from the analysis of single datasets can be unsatisfactory. A cost-effective remedy is to conduct integrative analysis of multiple heterogeneous datasets. In this article, we investigate composite penalization methods for estimation and marker selection in integrative analysis. The proposed methods use the minimax concave penalty (MCP) as the outer penalty. Under the homogeneity model, the ridge penalty is adopted as the inner penalty. Under the heterogeneity model, the Lasso penalty and MCP are adopted as the inner penalty. Effective computational algorithms based on coordinate descent are developed. Numerical studies, including simulation and analysis of practical cancer datasets, show satisfactory performance of the proposed methods. PMID:24578589
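
    A small sketch of the MCP building block used as the outer penalty here (the penalty function and its univariate coordinate-descent update for a standardized design); this is the generic MCP operator, not the authors' full composite integrative-analysis algorithm.

        import numpy as np

        def mcp_penalty(t, lam, gamma=3.0):
            """Minimax concave penalty: follows the lasso near zero but
            levels off at gamma*lam^2/2, reducing bias on large effects."""
            t = np.abs(t)
            return np.where(t <= gamma * lam,
                            lam * t - t**2 / (2.0 * gamma),
                            0.5 * gamma * lam**2)

        def mcp_update(z, lam, gamma=3.0):
            """Coordinate-descent solution of 0.5*(z - theta)^2 + MCP(theta)
            for a unit-scaled design and gamma > 1."""
            if abs(z) <= gamma * lam:
                soft = np.sign(z) * max(abs(z) - lam, 0.0)   # soft threshold
                return soft / (1.0 - 1.0 / gamma)
            return z   # coefficients beyond gamma*lam are left unshrunk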

  3. Globally maximizing, locally minimizing: unsupervised discriminant projection with applications to face and palm biometrics.

    PubMed

    Yang, Jian; Zhang, David; Yang, Jing-Yu; Niu, Ben

    2007-04-01

    This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multimanifolds-based learning framework which takes into account both the local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking to find a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, Locality Preserving Projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.
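
    A compact sketch of the UDP construction under stated assumptions (a k-nearest-neighbour adjacency defines the "local" pairs, all pairs are weighted equally, and a small ridge term keeps the local scatter invertible); it is meant to convey the criterion, not to reproduce the paper's exact formulation.

        import numpy as np
        from scipy.linalg import eigh
        from sklearn.neighbors import kneighbors_graph

        def udp(X, n_components=2, k=5):
            """Maximise w' S_N w / w' S_L w, where S_L is the scatter over
            neighbouring pairs and S_N the scatter over all other pairs."""
            n, d = X.shape
            H = kneighbors_graph(X, k, mode="connectivity").toarray()
            H = np.maximum(H, H.T)                       # symmetric adjacency
            diffs = X[:, None, :] - X[None, :, :]        # pairwise differences
            outer = np.einsum("ijk,ijl->ijkl", diffs, diffs)  # O(n^2 d^2) memory
            S_L = np.einsum("ij,ijkl->kl", H, outer) / (2.0 * n * n)
            S_N = np.einsum("ij,ijkl->kl", 1.0 - H, outer) / (2.0 * n * n)
            # Top generalised eigenvectors of S_N w = lambda * S_L w
            vals, vecs = eigh(S_N, S_L + 1e-8 * np.eye(d))
            return vecs[:, ::-1][:, :n_components]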

  4. Influence of pH-control in phosphoric acid treatment of titanium oxide and their powder properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onoda, Hiroaki, E-mail: onoda@kpu.ac.jp; Matsukura, Aki

    Highlights: • The photocatalytic activity was suppressed by phosphoric acid treatment. • The obtained pigment had small particles with sub-micrometer size. • By phosphoric acid treatment, the smoothness of samples improved. - Abstract: Titanium oxide, which has photocatalytic activity, is used as a white pigment for cosmetics. A certain degree of sebum on the skin is decomposed by the ultraviolet radiation in sunlight. In this work, titanium oxide was shaken with phosphoric acid at various pH values to synthesize a novel white pigment for cosmetics. Its chemical composition, powder properties, photocatalytic activity, color phase, and smoothness were studied. The obtained materials showed XRD peaks of titanium oxide; however, these peak intensities became weak after phosphoric acid treatment. The samples, both unheated and heated at 100 °C, contained small particles of sub-micrometer size. The photocatalytic activity of the obtained powders became weak after phosphoric acid treatment at pH 4 and 5, protecting the sebum on the skin.

  5. Observed oil and gas field size distributions: A consequence of the discovery process and prices of oil and gas

    USGS Publications Warehouse

    Drew, L.J.; Attanasi, E.D.; Schuenemeyer, J.H.

    1988-01-01

    If observed oil and gas field size distributions are obtained by random sampling, the fitted distributions should approximate that of the parent population of oil and gas fields. However, empirical evidence strongly suggests that larger fields tend to be discovered earlier in the discovery process than they would be by random sampling. Economic factors also can limit the number of small fields that are developed and reported. This paper examines observed size distributions in state and federal waters of offshore Texas. Results of the analysis demonstrate how the shape of the observable size distributions changes with significant hydrocarbon price changes. Comparison of state and federal observed size distributions in the offshore area shows how production cost differences also affect the shape of the observed size distribution. Methods for modifying the discovery rate estimation procedures when economic factors significantly affect the discovery sequence are presented. A primary conclusion of the analysis is that, because hydrocarbon price changes can significantly affect the observed discovery size distribution, one should not be confident about inferring the form and specific parameters of the parent field size distribution from the observed distributions. © 1988 International Association for Mathematical Geology.

  6. Does Graft Particle Type and Size Affect Ridge Dimensional Changes After Alveolar Ridge Split Procedure?

    PubMed

    Kheur, Mohit G; Kheur, Supriya; Lakha, Tabrez; Jambhekar, Shantanu; Le, Bach; Jain, Vinay

    2018-04-01

    The absence of an adequate volume of bone at implant sites requires augmentation procedures before the placement of implants. The aim of the present study was to assess the ridge width gain with the use of allografts and biphasic β-tricalcium phosphate with hydroxyapatite (alloplast) in ridge split procedures, when each were used in small (0.25 to 1 mm) and large (1 to 2 mm) particle sizes. A randomized controlled trial of 23 subjects with severe atrophy of the mandible in the horizontal dimension was conducted in a private institute. The patients underwent placement of 49 dental implants after a staged ridge split procedure. The patients were randomly allocated to alloplast and allograft groups (predictor variable). In each group, the patients were randomly assigned to either small graft particle or large graft particle size (predictor variable). The gain in ridge width (outcome variable) was assessed before implant placement. A 2-way analysis of variance test and the Student unpaired t test were used for evaluation of the ridge width gain between the allograft and alloplast groups (predictor variable). Differences were considered significant if P values were < .05. The sample included 23 patients (14 men and 9 women). The patients were randomly allocated to the alloplast (n = 11) or allograft (n = 12) group before the ridge split procedure. In each group, they were assigned to a small graft particle or large graft particle size (alloplast group, small particle in 5 and large particle size in 6 patients; allograft group, small particle in 6 and large particle size in 6). A statistically significant difference was observed between the 2 graft types. The average ridge width gain was significantly greater in the alloplast group (large, 4.40 ± 0.24 mm; small, 3.52 ± 0.59 mm) than in the allograft group (large, 3.82 ± 0.19 mm; small, 2.57 ± 0.16 mm). For both graft types (alloplast and allograft), the large particle size graft resulted in a greater ridge width gain compared with the small particle size graft (P < .05). Within the limitations of the present study, we suggest the use of large particle alloplast as the graft material of choice for staged ridge split procedures in the posterior mandible. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  7. Explanation of Two Anomalous Results in Statistical Mediation Analysis.

    PubMed

    Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
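
    The bias-corrected bootstrap at the center of these results can be sketched in a few lines; the OLS estimation of the a and b paths and the resampling scheme below are a generic textbook version, not the simulation code used in the study.

        import numpy as np
        from scipy.stats import norm

        def bc_bootstrap_ab(x, m, y, n_boot=5000, alpha=0.05, seed=0):
            """Bias-corrected bootstrap CI for the mediated effect a*b
            (X -> M -> Y), with each path estimated by OLS."""
            rng = np.random.default_rng(seed)
            n = len(x)

            def ab(idx):
                a = np.polyfit(x[idx], m[idx], 1)[0]          # M on X
                design = np.column_stack([np.ones(n), x[idx], m[idx]])
                b = np.linalg.lstsq(design, y[idx], rcond=None)[0][2]  # Y on M given X
                return a * b

            est = ab(np.arange(n))
            boots = np.array([ab(rng.integers(0, n, n)) for _ in range(n_boot)])
            z0 = norm.ppf((boots < est).mean())               # bias correction
            lo, hi = norm.cdf(2 * z0 + norm.ppf([alpha / 2, 1 - alpha / 2]))
            return est, np.quantile(boots, [lo, hi])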

  8. Probing defects in chemically synthesized ZnO nanostrucures by positron annihilation and photoluminescence spectroscopy

    NASA Astrophysics Data System (ADS)

    Chaudhuri, S. K.; Ghosh, Manoranjan; Das, D.; Raychaudhuri, A. K.

    2010-09-01

    The present article describes the size-induced changes in the structural arrangement of intrinsic defects present in chemically synthesized ZnO nanoparticles of various sizes. Routine x-ray diffraction and transmission electron microscopy have been performed to determine the shapes and sizes of the nanocrystalline ZnO samples. Detailed studies using positron annihilation spectroscopy reveal the presence of zinc vacancies, whereas analysis of the photoluminescence results indicates the signature of charged oxygen vacancies. The size-induced changes in the positron parameters, as well as in the photoluminescence properties, show contrasting, nonmonotonic trends as size varies from 4 to 85 nm. Small spherical particles below a critical size (~23 nm) acquire more positive surface charge due to the higher occupancy of the doubly charged oxygen vacancy, as compared to the bigger nanostructures where the singly charged oxygen vacancy predominates. This electronic alteration has been seen to trigger yet another interesting phenomenon, described as positron confinement inside the nanoparticles. Finally, based on all the results, a model of the structural arrangement of the intrinsic defects in the present samples has been constructed.

  9. Evidence for plant-derived xenomiRs based on a large-scale analysis of public small RNA sequencing data from human samples.

    PubMed

    Zhao, Qi; Liu, Yuanning; Zhang, Ning; Hu, Menghan; Zhang, Hao; Joshi, Trupti; Xu, Dong

    2018-01-01

    In recent years, an increasing number of studies have reported the presence of plant miRNAs in human samples, which has resulted in a hypothesis asserting the existence of plant-derived exogenous microRNA (xenomiR). However, this hypothesis is not widely accepted in the scientific community because of possible sample contamination and the small sample sizes, which lack rigorous statistical analysis. This study provides a systematic statistical test that can validate (or invalidate) the plant-derived xenomiR hypothesis by analyzing 388 small RNA sequencing datasets from human samples in 11 types of body fluids/tissues. A total of 166 types of plant miRNAs were found in at least one human sample, of which 14 plant miRNAs represented more than 80% of the total plant miRNA abundance in human samples. Plant miRNA profiles were characterized as tissue-specific in different human samples. Meanwhile, the plant miRNAs identified from the microbiome had negligible abundance compared to those from humans, while plant miRNA profiles in human samples were significantly different from those in plants, suggesting that sample contamination is an unlikely explanation for all the plant miRNAs detected in human samples. This study also provides a set of testable synthetic miRNAs with isotopes that can be detected in situ after being fed to animals.

  10. Characterization of a Hybrid Optical Microscopy/Laser Ablation Liquid Vortex Capture/Electrospray Ionization System for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cahill, John F.; Kertesz, Vilmos; Van Berkel, Gary J.

    Herein, a commercial optical microscope, laser microdissection instrument was coupled with an electrospray ionization mass spectrometer via a low profile liquid vortex capture probe to yield a hybrid optical microscopy/mass spectrometry imaging system. The instrument has bright-field and fluorescence microscopy capabilities in addition to a highly focused UV laser beam that is utilized for laser ablation of samples. With this system, material laser ablated from a sample using the microscope was caught by a liquid vortex capture probe and transported in solution for analysis by electrospray ionization mass spectrometry. Both lane scanning and spot sampling mass spectral imaging modes were used. The smallest area the system was able to ablate was ~0.544 μm × ~0.544 μm, achieved by oversampling of the smallest laser ablation spot size that could be obtained (~1.9 μm). With use of a model photoresist surface, known features as small as ~1.5 μm were resolved. The capabilities of the system with real world samples were demonstrated first with a blended polymer thin film containing poly(2-vinylpyridine) and poly(N-vinylcarbazole). Using spot sampling imaging, sub-micrometer sized features (0.62, 0.86, and 0.98 μm) visible by optical microscopy were clearly distinguished in the mass spectral images. A second real world example showed the imaging of trace amounts of cocaine in mouse brain thin tissue sections. Lastly, with use of a lane scanning mode with ~6 μm × ~6 μm data pixels, features in the tissue as small as 15 μm in size could be distinguished in both the mass spectral and optical images.

  11. Characterization of a Hybrid Optical Microscopy/Laser Ablation Liquid Vortex Capture/Electrospray Ionization System for Mass Spectrometry Imaging

    DOE PAGES

    Cahill, John F.; Kertesz, Vilmos; Van Berkel, Gary J.

    2015-10-22

    Herein, a commercial optical microscope, laser microdissection instrument was coupled with an electrospray ionization mass spectrometer via a low profile liquid vortex capture probe to yield a hybrid optical microscopy/mass spectrometry imaging system. The instrument has bright-field and fluorescence microscopy capabilities in addition to a highly focused UV laser beam that is utilized for laser ablation of samples. With this system, material laser ablated from a sample using the microscope was caught by a liquid vortex capture probe and transported in solution for analysis by electrospray ionization mass spectrometry. Both lane scanning and spot sampling mass spectral imaging modes were used. The smallest area the system was able to ablate was ~0.544 μm × ~0.544 μm, achieved by oversampling of the smallest laser ablation spot size that could be obtained (~1.9 μm). With use of a model photoresist surface, known features as small as ~1.5 μm were resolved. The capabilities of the system with real world samples were demonstrated first with a blended polymer thin film containing poly(2-vinylpyridine) and poly(N-vinylcarbazole). Using spot sampling imaging, sub-micrometer sized features (0.62, 0.86, and 0.98 μm) visible by optical microscopy were clearly distinguished in the mass spectral images. A second real world example showed the imaging of trace amounts of cocaine in mouse brain thin tissue sections. Lastly, with use of a lane scanning mode with ~6 μm × ~6 μm data pixels, features in the tissue as small as 15 μm in size could be distinguished in both the mass spectral and optical images.

  12. Scalable population estimates using spatial-stream-network (SSN) models, fish density surveys, and national geospatial database frameworks for streams

    Treesearch

    Daniel J. Isaak; Jay M. Ver Hoef; Erin E. Peterson; Dona L. Horan; David E. Nagel

    2017-01-01

    Population size estimates for stream fishes are important for conservation and management, but sampling costs limit the extent of most estimates to small portions of river networks that encompass 100s–10 000s of linear kilometres. However, the advent of large fish density data sets, spatial-stream-network (SSN) models that benefit from nonindependence among samples,...

  13. Indoor particle levels in small- and medium-sized commercial buildings in California.

    PubMed

    Wu, Xiangmei May; Apte, Michael G; Bennett, Deborah H

    2012-11-20

    This study monitored indoor and outdoor particle concentrations in 37 small and medium commercial buildings (SMCBs) in California with three buildings sampled on two occasions, resulting in 40 sampling days. Sampled buildings included offices, retail establishments, restaurants, dental offices, and hair salons, among others. Continuous measurements were made for both ultrafine and fine particulate matter as well as black carbon inside and outside of the building. Integrated PM(2.5), PM(2.5-10), and PM(10) samples were also collected inside and outside the building. The majority of the buildings had indoor/outdoor (I/O) particle concentration ratios less than 1.0, indicating that contributions from indoor sources are less than removal of outdoor particles. However, some of the buildings had I/O ratios greater than 1, indicating significant indoor particle sources. This was particularly true of restaurants, hair salons, and dental offices. The infiltration factor was estimated from a regression analysis of indoor and outdoor concentrations for each particle size fraction, finding lower values for ultrafine and coarse particles than for submicrometer particles, as expected. The I/O ratio of black carbon was used as a relative measure of the infiltration factor of particles among buildings, with a geometric mean of 0.62. The contribution of indoor sources to indoor particle levels was estimated for each building.

  14. Value of information methods to design a clinical trial in a small population to optimise a health economic utility function.

    PubMed

    Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel

    2018-02-08

    Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
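
    A toy Monte Carlo version of the decision-theoretic idea, with the prior, costs, and adoption rule all chosen for illustration: the expected societal gain is computed as a function of the per-arm sample size, and shrinking the target population shrinks the optimal trial, in line with the conclusion above.

        import numpy as np

        def expected_net_gain(n, pop_size, mu0=0.2, tau0=0.5, sigma=1.0,
                              cost=0.01, crit=1.96, n_sim=20000, seed=1):
            """Expected utility of a two-arm trial with n patients per arm:
            adopt if z > crit, then every future patient gains the (unknown)
            true effect delta; all utilities in standardised outcome units."""
            rng = np.random.default_rng(seed)
            delta = rng.normal(mu0, tau0, n_sim)      # prior draws
            se = sigma * np.sqrt(2.0 / n)
            z = rng.normal(delta, se) / se            # trial z-statistic
            adopt = z > crit
            return np.mean(adopt * delta * pop_size) - cost * 2 * n

        sizes = np.arange(10, 500, 10)
        for pop in (100, 1000, 10000):
            best = max(sizes, key=lambda n: expected_net_gain(n, pop))
            print(f"population {pop}: optimal n per arm ~ {best}")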

  15. Estimation of within-stratum variance for sample allocation: Foreign commodity production forecasting

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)

    1980-01-01

    The problem of determining the stratum variances required for an optimum sample allocation in remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to stratum variance estimation for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
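
    Once stratum variances are estimated, they feed a standard optimum (Neyman) allocation; a minimal sketch with invented stratum sizes and standard deviations:

        import numpy as np

        def neyman_allocation(n_total, stratum_sizes, stratum_sds):
            """Optimum allocation: sample stratum h in proportion to
            N_h * S_h, where S_h is the within-stratum standard deviation."""
            N = np.asarray(stratum_sizes, dtype=float)
            S = np.asarray(stratum_sds, dtype=float)
            weights = N * S / (N * S).sum()
            return np.rint(n_total * weights).astype(int)

        # e.g. three crop-reporting strata with historical variance estimates
        print(neyman_allocation(100, [500, 300, 200], [2.0, 4.0, 8.0]))
        # -> [26 32 42]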

  16. Effective density and morphology of particles emitted from small-scale combustion of various wood fuels.

    PubMed

    Leskinen, Jani; Ihalainen, Mika; Torvela, Tiina; Kortelainen, Miika; Lamberg, Heikki; Tiitta, Petri; Jakobi, Gert; Grigonyte, Julija; Joutsensaari, Jorma; Sippula, Olli; Tissari, Jarkko; Virtanen, Annele; Zimmermann, Ralf; Jokiniemi, Jorma

    2014-11-18

    The effective density of fine particles emitted from small-scale wood combustion of various fuels was determined with a system consisting of an aerosol particle mass analyzer and a scanning mobility particle sizer (APM-SMPS). A novel sampling chamber was combined with the system to enable measurements of highly fluctuating combustion processes. In addition, mass-mobility exponents (which relate mass to mobility size) were determined from the density data to describe the shape of the particles. Particle size, type of fuel, combustion phase, and combustion conditions were found to have an effect on the effective density and the particle shape. For example, the steady combustion phase produced agglomerates with an effective density of roughly 1 g cm⁻³ for small particles, decreasing to 0.25 g cm⁻³ for 400 nm particles. The effective density was higher for particles emitted from the glowing-embers phase (ca. 1-2 g cm⁻³), and a clear size dependency was not observed because the particles were nearly spherical in shape. This study shows that a single value cannot be used for the effective density of particles emitted from wood combustion.
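
    Both reported quantities follow directly from paired mass-mobility measurements; the sketch below uses invented values roughly in the range of the densities quoted above.

        import numpy as np

        # Mobility diameters (nm) and APM particle masses (fg); illustrative
        # values, not measured data.
        d_m = np.array([50.0, 100.0, 200.0, 400.0])
        mass = np.array([0.065, 0.42, 1.9, 8.4])

        # Effective density: measured mass over mobility-equivalent sphere
        # volume, converted to g cm^-3.
        rho_eff = mass * 1e-18 / (np.pi / 6.0 * (d_m * 1e-9) ** 3) / 1e3
        # Mass-mobility exponent: slope of log(mass) vs log(d_m); ~3 for
        # compact spheres, noticeably lower for open agglomerates.
        Dm = np.polyfit(np.log(d_m), np.log(mass), 1)[0]
        print(rho_eff.round(2), f"mass-mobility exponent = {Dm:.2f}")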

  17. Non-Born-Oppenheimer self-consistent field calculations with cubic scaling

    NASA Astrophysics Data System (ADS)

    Moncada, Félix; Posada, Edwin; Flores-Moreno, Roberto; Reyes, Andrés

    2012-05-01

    An efficient nuclear molecular orbital methodology is presented. This approach combines an auxiliary density functional theory for electrons (ADFT) with a localized Hartree product (LHP) representation for the nuclear wave function. A series of test calculations conducted on small molecules showed that the energy and geometry errors introduced by the use of the ADFT and LHP approximations are small and comparable to those obtained by the use of electronic ADFT. In addition, sample calculations performed on (HF)n chains showed that the combined ADFT/LHP approach scales cubically with system size (n), as opposed to the quartic scaling of Hartree-Fock/LHP or DFT/LHP methods. Even for medium-sized molecules the improved scaling of the ADFT/LHP approach resulted in speedups of at least 5x with respect to Hartree-Fock/LHP calculations. The ADFT/LHP method opens up the possibility of studying nuclear quantum effects in large systems that would otherwise be impractical.

  18. Importance of size and distribution of Ni nanoparticles for the hydrodeoxygenation of microalgae oil.

    PubMed

    Song, Wenji; Zhao, Chen; Lercher, Johannes A

    2013-07-22

    Improved synthetic approaches for preparing small-sized Ni nanoparticles (d=3 nm) supported on HBEA zeolite have been explored and compared with the traditional impregnation method. The formation of surface nickel silicate/aluminate involved in the two precipitation processes is inferred to lead to the stronger interaction between the metal and the support. The lower Brønsted acid concentrations of these two Ni/HBEA catalysts compared with the parent zeolite, caused by the partial exchange of Brønsted acid sites by Ni(2+) cations, do not influence the hydrodeoxygenation rates, but alter the product selectivity. Higher initial rates and higher stability have been achieved with these optimized catalysts for the hydrodeoxygenation of stearic acid and microalgae oil. Small metal particles facilitate high initial catalytic activity in the fresh sample, and size uniformity ensures high catalyst stability. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Closure and ratio correlation analysis of lunar chemical and grain size data

    NASA Technical Reports Server (NTRS)

    Butler, J. C.

    1976-01-01

    Major element and major element plus trace element analyses were selected from the lunar data base for Apollo 11, 12 and 15 basalt and regolith samples. Summary statistics for each of the six data sets were compiled, and the effects of closure on the Pearson product moment correlation coefficient were investigated using the Chayes and Kruskal approximation procedure. In general, two types of closure effects are evident in these data sets: negative correlations of intermediate size that are solely the result of closure, and correlations of small absolute value that depart significantly from their expected closure correlations, which are of intermediate size. It is shown that a positive closure correlation will arise only when the product of the coefficients of variation is very small (less than 0.01 for most data sets); in general, trace elements in the lunar data sets exhibit relatively large coefficients of variation.
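
    The pure closure effect is easy to demonstrate by simulation: independent, positive components acquire negative correlations once each sample is forced to sum to a constant. A minimal sketch on synthetic data (not the lunar data sets):

        import numpy as np

        rng = np.random.default_rng(42)
        # Independent positive "open" components (e.g. element abundances)...
        open_parts = rng.lognormal(mean=0.0, sigma=0.3, size=(500, 4))
        # ...become a closed array once each sample is normalised to 100%.
        closed = 100.0 * open_parts / open_parts.sum(axis=1, keepdims=True)

        print(np.corrcoef(open_parts, rowvar=False).round(2))  # ~0 off-diagonal
        print(np.corrcoef(closed, rowvar=False).round(2))      # negative, from closure alone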

  20. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    PubMed

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  1. Microbiological performance of dairy processing plants is influenced by scale of production and the implemented food safety management system: a case study.

    PubMed

    Opiyo, Beatrice Atieno; Wangoh, John; Njage, Patrick Murigu Kamau

    2013-06-01

    The effects of existing food safety management systems and size of the production facility on microbiological quality in the dairy industry in Kenya were studied. A microbial assessment scheme was used to evaluate 14 dairies in Nairobi and its environs, and their performance was compared based on their size and on whether they were implementing hazard analysis critical control point (HACCP) systems and International Organization for Standardization (ISO) 22000 recommendations. Environmental samples from critical sampling locations, i.e., workers' hands and food contact surfaces, and from end products were analyzed for microbial quality, including hygiene indicators and pathogens. Microbial safety level profiles (MSLPs) were constructed from the microbiological data to obtain an overview of contamination. The maximum MSLP score for environmental samples was 18 (six microbiological parameters, each with a maximum MSLP score of 3) and that for end products was 15 (five microbiological parameters). Three dairies (two large scale and one medium scale; 21% of total) achieved the maximum MSLP scores of 18 for environmental samples and 15 for the end product. Escherichia coli was detected on food contact surfaces in three dairies, all of which were small scale dairies, and the microorganism was also present in end product samples from two of these dairies, an indication of cross-contamination. Microbial quality was poorest in small scale dairies. Most operations in these dairies were manual, with minimal system documentation. Noncompliance with hygienic practices such as hand washing and cleaning and disinfection procedures, which is common in small dairies, directly affects the microbial quality of the end products. Dairies implementing HACCP systems or ISO 22000 recommendations achieved maximum MSLP scores and hence produced safer products.

  2. Cracks and nanodroplets produced on tungsten surface samples by dense plasma jets

    NASA Astrophysics Data System (ADS)

    Ticoş, C. M.; Galaţanu, M.; Galaţanu, A.; Luculescu, C.; Scurtu, A.; Udrea, N.; Ticoş, D.; Dumitru, M.

    2018-03-01

    Small samples of 12.5 mm in diameter made from pure tungsten were exposed to a dense plasma jet produced by a coaxial plasma gun operated at 2 kJ. The surface of the samples was analyzed using a scanning electron microscope (SEM) before and after applying consecutive plasma shots. Cracks and craters were produced in the surface due to surface tensions during plasma heating. Nanodroplets and micron-sized droplets could be observed on the sample surfaces. An energy-dispersive spectroscopy (EDS) analysis revealed that the composition of these droplets coincided with that of the gun electrode material. Four types of samples were prepared by spark plasma sintering from powders with average particle sizes ranging from 70 nanometers up to 80 μm. The plasma power load on the sample surface was estimated to be ≈4.7 MJ m⁻² s⁻¹/² per shot. The electron temperature and density in the plasma jet had peak values of 17 eV and 1.6 × 10²² m⁻³, respectively.

  3. Microcystin distribution in physical size class separations of natural plankton communities

    USGS Publications Warehouse

    Graham, J.L.; Jones, J.R.

    2007-01-01

    Phytoplankton communities in 30 northern Missouri and Iowa lakes were physically separated into 5 size classes (>100 μm, 53-100 μm, 35-53 μm, 10-35 μm, 1-10 μm) during 15-21 August 2004 to determine the distribution of microcystin (MC) in size-fractionated lake samples and assess how net collections influence estimates of MC concentration. MC was detected in whole water (total) from 83% of lakes sampled, and total MC values ranged from 0.1-7.0 μg/L (mean = 0.8 μg/L). On average, MC in the >100 μm size class comprised ~40% of total MC, while other individual size classes contributed 9-20% to total MC. MC values decreased with size class and were significantly greater in the >100 μm size class (mean = 0.5 μg/L) than the 35-53 μm (mean = 0.1 μg/L), 10-35 μm (mean = 0.0 μg/L), and 1-10 μm (mean = 0.0 μg/L) size classes (p < 0.01). MC values in nets with 100-μm, 53-μm, 35-μm, and 10-μm mesh were cumulatively summed to simulate the potential bias of measuring MC with various size plankton nets. On average, a 100-μm net underestimated total MC by 51%, compared to 37% for a 53-μm net, 28% for a 35-μm net, and 17% for a 10-μm net. While plankton nets consistently underestimated total MC, concentration of algae with net sieves allowed detection of MC at low levels (~0.01 μg/L); 93% of lakes had detectable levels of MC in concentrated samples. Thus, small mesh plankton nets are an option for documenting MC occurrence, but whole water samples should be collected to characterize total MC concentrations. © Copyright by the North American Lake Management Society 2007.

  4. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    PubMed

    Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H

    2017-01-01

    In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
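
    For reference, the criteria compared in this study are computed from an OLS residual sum of squares as sketched below; with MRM, the n entering these formulas is the number of distance pairs (m(m-1)/2 for m sampled locations), which is the inflated sample size identified above as one driver of the failure.

        import numpy as np

        def information_criteria(rss, n, k):
            """AIC, AICc and BIC for a Gaussian OLS fit with k estimated
            parameters (including the intercept) on n rows; with MRM the
            rows are non-independent pairwise distances, so n overstates
            the effective sample size."""
            aic = n * np.log(rss / n) + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)
            bic = n * np.log(rss / n) + k * np.log(n)
            return aic, aicc, bic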

  5. Generalizing the Network Scale-Up Method: A New Estimator for the Size of Hidden Populations*

    PubMed Central

    Feehan, Dennis M.; Salganik, Matthew J.

    2018-01-01

    The network scale-up method enables researchers to estimate the size of hidden populations, such as drug injectors and sex workers, using sampled social network data. The basic scale-up estimator offers advantages over other size estimation techniques, but it depends on problematic modeling assumptions. We propose a new generalized scale-up estimator that can be used in settings with non-random social mixing and imperfect awareness about membership in the hidden population. Further, the new estimator can be used when data are collected via complex sample designs and from incomplete sampling frames. However, the generalized scale-up estimator also requires data from two samples: one from the frame population and one from the hidden population. In some situations these data from the hidden population can be collected by adding a small number of questions to already planned studies. For other situations, we develop interpretable adjustment factors that can be applied to the basic scale-up estimator. We conclude with practical recommendations for the design and analysis of future studies. PMID:29375167
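
    The basic scale-up estimator that the generalized version extends fits in one line; the respondent numbers below are toy assumptions for illustration.

        import numpy as np

        def basic_scale_up(hidden_alters, network_sizes, frame_size):
            """Basic network scale-up estimate of hidden-population size:
            N_H ~ (sum of hidden-population alters reported) /
                  (sum of respondents' personal network sizes) * N_F."""
            return np.sum(hidden_alters) / np.sum(network_sizes) * frame_size

        # Three respondents report knowing 2, 0 and 1 members of the hidden
        # population, out of estimated network sizes 300, 250 and 400:
        print(basic_scale_up([2, 0, 1], [300, 250, 400], frame_size=1_000_000))
        # -> ~3158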

  6. Mesh-size effects on drift sample composition as determined with a triple net sampler

    USGS Publications Warehouse

    Slack, K.V.; Tilley, L.J.; Kennelly, S.S.

    1991-01-01

    Nested nets of three different mesh apertures were used to study mesh-size effects on drift collected in a small mountain stream. The innermost, middle, and outermost nets had, respectively, 425 μm, 209 μm and 106 μm openings, a design that reduced clogging while partitioning collections into three size groups. The open area of mesh in each net, from largest to smallest mesh opening, was 3.7, 5.7 and 8.0 times the area of the net mouth. Volumes of filtered water were determined with a flowmeter. The results are expressed as (1) drift retained by each net, (2) drift that would have been collected by a single net of given mesh size, and (3) the percentage of total drift (the sum of the catches from all three nets) that passed through the 425 μm and 209 μm nets. During a two day period in August 1986, Chironomidae larvae were dominant numerically in all 209 μm and 106 μm samples and midday 425 μm samples. Large drifters (Ephemerellidae) occurred only in 425 μm or 209 μm nets, but the general pattern was an increase in abundance and number of taxa with decreasing mesh size. Relatively more individuals occurred in the larger mesh nets at night than during the day. The two larger mesh sizes retained 70% of the total sediment/detritus in the drift collections, and this decreased the rate of clogging of the 106 μm net. If an objective of a sampling program is to compare drift density or drift rate between areas or sampling dates, the same mesh size should be used for all sample collection and processing. The mesh aperture used for drift collection should retain all species and life stages of significance in a study. The nested net design enables an investigator to test the adequacy of drift samples. © 1991 Kluwer Academic Publishers.

  7. Measurement of marine picoplankton cell size by using a cooled, charge-coupled device camera with image-analyzed fluorescence microscopy.

    PubMed Central

    Viles, C L; Sieracki, M E

    1992-01-01

    Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 microns) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 microns) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured. PMID:1610183

  8. Impact of Strain Elastography on BI-RADS classification in small invasive lobular carcinoma.

    PubMed

    Chiorean, Angelica Rita; Szep, Mădălina Brîndușa; Feier, Diana Sorina; Duma, Magdalena; Chiorean, Marco Andrei; Strilciuc, Ștefan

    2018-05-02

    The purpose of this study was to determine the impact of strain elastography (SE) on the Breast Imaging Reporting and Data System (BI-RADS) classification depending on invasive lobular carcinoma (ILC) lesion size. We performed a retrospective analysis on a sample of 152 female subjects examined between January 2010 and January 2017. SE was performed on all patients and ILC was subsequently diagnosed by surgical or ultrasound-guided biopsy. BI-RADS 1, 2, 6 and Tsukuba BGR cases were omitted. BI-RADS scores were recorded before and after the use of SE. The differences between scores were compared to the ILC tumor size using nonparametric tests and binary logistic regression. We controlled for age, focality, clinical assessment, heredo-collateral antecedents, and B-mode and Doppler ultrasound examination. An ROC curve was used to identify the optimal cut-off point for size in relation to the BI-RADS classification difference using Youden's index. The histological subtypes of the ILC lesions (n=180) included in the sample were luminal A (70%, n=126), luminal B (27.78%, n=50), triple negative (1.67%, n=3) and HER2+ (0.56%, n=1). The BI-RADS classification was higher when SE was performed (Z = -6.629, p<0.000). The ROC curve identified a cut-off point of 13 mm for size in relation to the BI-RADS classification difference (J=0.670, p<0.000). Small ILC tumors were 17.92% more likely to influence BI-RADS classification (p<0.000). SE offers enhanced BI-RADS classification in small ILC tumors (<13 mm). Sonoelastography brings added value to B-mode breast ultrasound as an adjunct to mammography in breast cancer screening.

  9. Variation in the isotopic composition of striped weakfish Cynoscion guatucupa of the Southwest Atlantic Ocean in response to dietary shifts.

    PubMed

    Viola, M N Paso; Riccialdelli, L; Jaureguizar, A; Panarello, H O; Cappozzo, H L

    2018-05-01

    The aim of this study was to analyze the isotopic composition in muscle of striped weakfish Cynoscion guatucupa from the Southwest Atlantic Ocean in order to evaluate possible variation in δ13C and δ15N in response to the dietary shifts that occur as the animals grow. We also looked for isotopic evidence of differences between sampling locations. The results showed agreement between the isotope analysis and previous conventional studies. Differences in isotope composition between sampling locations were not observed. A positive relation exists between isotope values and the total body length of the animals. Cluster analysis defined three size-class groups, validated by MDS. Differences in the relative consumption of prey species in each size class were also observed by applying isotope mixing models (SIAR). Variation in δ15N among size classes would be associated with the consumption of a different type of prey as the animals grow. Small striped weakfish feed on small crustaceans and progressively increase their consumption of fish (anchovy, Engraulis anchoita), thereby increasing their isotope values. On the other hand, differences in δ13C values seemed to be related to age-class-specific spatial distribution patterns. Therefore, large and small striped weakfish remain specialized but feed on different prey at different trophic levels. These results contribute to the study of the diet of striped weakfish, improve isotopic ecology models and highlight the importance of accounting for variation in isotopic composition in response to dietary shifts with the size of one of the most important fishery resources in the region.

  10. A genetic investigation of Korean mummies from the Joseon Dynasty.

    PubMed

    Kim, Na Young; Lee, Hwan Young; Park, Myung Jin; Yang, Woo Ick; Shin, Kyoung-Jin

    2011-01-01

    Two Korean mummies (Danwoong-mirra and Yoon-mirra) found in medieval tombs in the central region of the Korean peninsula were genetically investigated by analysis of mitochondrial DNA (mtDNA), Y-chromosomal short tandem repeat (Y-STR) and the ABO gene. Danwoong-mirra is a male child mummy and Yoon-mirra is a pregnant female mummy, dating back about 550 and 450 years, respectively. DNA was extracted from soft tissues or bones. mtDNA, Y-STR and the ABO gene were amplified using a small size amplicon strategy and were analyzed according to the criteria of ancient DNA analysis to ensure that authentic DNA typing results were obtained from these ancient samples. Analysis of mtDNA hypervariable region sequence and coding region single nucleotide polymorphism (SNP) information revealed that Danwoong-mirra and Yoon-mirra belong to the East Asian mtDNA haplogroups D4 and M7c, respectively. The Y-STRs were analyzed in the male child mummy (Danwoong-mirra) using the AmpFlSTR® Yfiler PCR Amplification Kit and an in-house Y-miniplex plus system, and could be characterized in 4 loci with small amplicon size. The analysis of ABO gene SNPs using multiplex single base extension methods revealed that the ABO blood types of Danwoong-mirra and Yoon-mirra are AO01 and AB, respectively. The small size amplicon strategy and the authentication process in the present study will be effectively applicable to future genetic analyses of various forensic and ancient samples.

  11. Agile convolutional neural network for pulmonary nodule classification using CT images.

    PubMed

    Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei

    2018-04-01

    To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
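
    A hedged PyTorch sketch of a small LeNet-style network with the reported settings (learning rate 0.005, dropout, Gaussian weight initialization; the batch size of 32 would be set in the DataLoader); the kernel size of 7 and the 50×50 input patch are assumptions, since the actual kernel value is lost in the "[Formula: see text]" placeholder above.

        import torch
        import torch.nn as nn

        class AgileCNNSketch(nn.Module):
            def __init__(self, k=7):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 32, k), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, k), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Flatten(), nn.Dropout(0.5),
                    nn.Linear(64 * 8 * 8, 2),   # 50x50 input -> 8x8 feature maps
                )
                for m in self.modules():        # Gaussian weight initialization
                    if isinstance(m, (nn.Conv2d, nn.Linear)):
                        nn.init.normal_(m.weight, std=0.01)
                        nn.init.zeros_(m.bias)

            def forward(self, x):
                return self.net(x)

        model = AgileCNNSketch()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.005)
        logits = model(torch.zeros(32, 1, 50, 50))   # one batch of 32 patches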

  12. A high-throughput assay format for determination of nitrate reductase and nitrite reductase enzyme activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNally, N.; Liu, Xiang Yang; Choudary, P.V.

    1997-01-01

    The authors describe a microplate-based high-throughput procedure for rapid assay of the enzyme activities of nitrate reductase and nitrite reductase, using extremely small volumes of reagents. The new procedure offers the advantages of rapidity, small sample size (nanoliter volumes), low cost, and a dramatic increase in the throughput sample number that can be analyzed simultaneously. Additional advantages can be accessed by using microplate reader application software packages that permit assigning a group type to the wells, recording of the data on exportable data files and exercising the option of using the kinetic or endpoint reading modes. The assay can also be used independently for detecting nitrite residues/contamination in environmental/food samples. 10 refs., 2 figs.

  13. Temporal dynamics of linkage disequilibrium in two populations of bighorn sheep

    PubMed Central

    Miller, Joshua M; Poissant, Jocelyn; Malenfant, René M; Hogg, John T; Coltman, David W

    2015-01-01

    Linkage disequilibrium (LD) is the nonrandom association of alleles at two markers. Patterns of LD have biological implications as well as practical ones when designing association studies or conservation programs aimed at identifying the genetic basis of fitness differences within and among populations. However, the temporal dynamics of LD in wild populations has received little empirical attention. In this study, we examined the overall extent of LD, the effect of sample size on the accuracy and precision of LD estimates, and the temporal dynamics of LD in two populations of bighorn sheep (Ovis canadensis) with different demographic histories. Using over 200 microsatellite loci, we assessed two metrics of multi-allelic LD, D′ and χ′². We found that both populations exhibited high levels of LD, although the extent was much shorter in a native population than one that was founded via translocation, experienced a prolonged bottleneck post founding, followed by recent admixture. In addition, we observed significant variation in LD in relation to the sample size used, with small sample sizes leading to depressed estimates of the extent of LD but inflated estimates of background levels of LD. In contrast, there was not much variation in LD among yearly cross-sections within either population once sample size was accounted for. Lack of pronounced interannual variability suggests that researchers may not have to worry about interannual variation when estimating LD in a population and can instead focus on obtaining the largest sample size possible. PMID:26380673

  14. Effects of normalization on quantitative traits in association test

    PubMed Central

    2009-01-01

    Background: Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results: We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization is also investigated. Our results show that rank-based transformation gives generally the best and most consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion: For small sample sizes or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample sizes and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
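
    The rank-based transformation favored here is commonly implemented as a rank-based inverse normal transform; a minimal sketch using the Blom offset (the particular offset is an assumption, as the abstract does not specify one):

        import numpy as np
        from scipy.stats import norm, rankdata

        def rank_inverse_normal(x, c=3.0 / 8.0):
            """Map a trait to near-normality via ranks:
            phi^-1((r - c) / (n - 2c + 1))."""
            r = rankdata(x)
            return norm.ppf((r - c) / (len(x) - 2.0 * c + 1.0))

        trait = np.random.default_rng(0).lognormal(size=200)  # skewed raw trait
        z = rank_inverse_normal(trait)
        print(round(float(z.mean()), 2), round(float(z.std()), 2))  # ~0, ~1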

  15. Optimal Wavelength Selection on Hyperspectral Data with Fused Lasso for Biomass Estimation of Tropical Rain Forest

    NASA Astrophysics Data System (ADS)

    Takayama, T.; Iwasaki, A.

    2016-06-01

    Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, prediction accuracy is affected by a small-sample-size problem, which commonly manifests as overfitting when using high-dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to limitations on the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: the sparsity solves the small-sample-size problem through dimensionality reduction, and the grouping mitigates the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear regression, partial least squares regression, and lasso regression. Furthermore, fusing spectral information with spatial information derived from a texture index increased the prediction accuracy further (RMSE of 62.62 t/ha). This analysis demonstrates the effectiveness of the fused lasso and image texture in biomass estimation of tropical forests.
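
    For readers unfamiliar with the fused lasso, the objective combines a squared-error loss with an L1 penalty on the coefficients (sparsity) and an L1 penalty on their successive differences (grouping of adjacent bands). The following minimal CVXPY sketch uses synthetic "spectra"; the band count, penalty weights, and data are illustrative, not the authors' settings:

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_bands = 40, 200  # few samples, many bands
        X = rng.normal(size=(n_samples, n_bands))
        w_true = np.zeros(n_bands)
        w_true[60:80] = 1.5           # one contiguous block of informative bands
        y = X @ w_true + rng.normal(scale=0.5, size=n_samples)

        w = cp.Variable(n_bands)
        lam1, lam2 = 1.0, 5.0         # sparsity and fusion weights
        objective = cp.Minimize(
            cp.sum_squares(y - X @ w)
            + lam1 * cp.norm1(w)           # sparsity: most bands drop out
            + lam2 * cp.norm1(cp.diff(w))  # fusion: selected bands form groups
        )
        cp.Problem(objective).solve()
        print("selected bands:", np.flatnonzero(np.abs(w.value) > 1e-3))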

  16. Highly efficient and ultra-small volume separation by pressure-driven liquid chromatography in extended nanochannels.

    PubMed

    Ishibashi, Ryo; Mawatari, Kazuma; Kitamori, Takehiko

    2012-04-23

    The rapidly developing interest in nanofluidic analysis, which is used to examine liquids in amounts ranging from the attoliter to the femtoliter scale, correlates with the recent interest in decreased sample amounts, such as in the field of single-cell analysis. For general nanofluidic analysis, the fact that a pressure-driven flow does not limit the choice of solvents (aqueous or organic) is important. This study shows the first pressure-driven liquid chromatography technique that enables separation of atto- to femtoliter sample volumes with high separation efficiency within a few seconds. The apparent diffusion coefficient measurement of the unretained sample suggests that there is no increase in the viscosity of toluene in the extended nanospace, unlike in aqueous solvents. Evaluation of the normal-phase separation, therefore, should involve only the examination of the effect of the small size of the extended nanospace. Compared to a conventionally packed high-performance liquid chromatography column, the separation here is faster (4 s) by 2 orders of magnitude, uses a smaller injection volume (on the order of 1 fL) by 9 orders, and achieves a higher separation efficiency (440,000 plates/m) by 1 order. Moreover, the separation behavior agrees with theory, showing that this high efficiency is due to the small and controlled size of the separation channel, where diffusion along the channel-depth direction is fast enough to be neglected. Our chip-based platform should allow direct and real-time analysis or screening of ultralow-volume samples. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Neighbourhood deprivation and the price and availability of fruit and vegetables in Scotland.

    PubMed

    Cummins, S; Smith, D M; Aitken, Z; Dawson, J; Marshall, D; Sparks, L; Anderson, A S

    2010-10-01

    Previous research has suggested that fruits and vegetables are more expensive and less readily available in more deprived communities. However, this evidence is mainly based on small samples drawn from specific communities, often located in urban settings, and thus is not generalisable to national contexts. The present study explores the influence of neighbourhood deprivation and local retail structure on the price and availability of fruit and vegetables in a sample of areas representing the diversity of urban-rural environments across Scotland, UK. A sample of 310 stores located in 10 diverse areas of Scotland was surveyed, and data on the price and availability of a basket of 15 fruit and vegetable items were collected. The data were analysed to identify the influence of store type and neighbourhood deprivation on the price and availability of fruits and vegetables. Neighbourhood deprivation and store type did not significantly predict the price of a basket of fruit and vegetables within the sample, although baskets did decrease in price as store size increased. The highest prices were found in the smallest stores located in the most deprived areas. Availability of fruit and vegetables is lower in small shops located within deprived neighbourhoods compared to similar shops in affluent areas. Overall, availability increases with increasing store size. Availability of fruit and vegetables varies significantly by neighbourhood deprivation in small stores. Policies aimed at promoting sales of fruit and vegetables in these outlets may benefit residents in deprived areas. © 2010 The Authors. Journal compilation © 2010 The British Dietetic Association Ltd.

  18. Inadequacy of Conventional Grab Sampling for Remediation Decision-Making for Metal Contamination at Small-Arms Ranges.

    PubMed

    Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A

    2018-01-01

    Research shows grab sampling is inadequate for evaluating military ranges contaminated with energetics because of their highly heterogeneous distribution. Similar studies assessing the heterogeneous distribution of metals at small-arms ranges (SARs) are lacking. To address this, we evaluated whether grab sampling provides appropriate data for performing risk analysis at metal-contaminated SARs characterized with 30-48 grab samples. We evaluated the extractable metal content (Cu, Pb, Sb, and Zn) of the field data using a Monte Carlo random resampling with replacement (bootstrapping) simulation approach. Results indicate the 95% confidence interval of the mean for Pb (432 mg/kg) at one site was 200-700 mg/kg, with a data range of 5-4500 mg/kg. Considering that the U.S. Environmental Protection Agency screening level for lead is 400 mg/kg, the necessity of cleanup at this site is unclear. Resampling based on populations of 7 and 15 samples, sample sizes more realistic for the area, yielded high false-negative rates.
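
    The bootstrapping approach referred to above resamples the observed grab samples with replacement to approximate the sampling distribution of the mean. A minimal Python sketch, with hypothetical Pb values standing in for the field data:

        import numpy as np

        rng = np.random.default_rng(42)
        pb = rng.lognormal(mean=5.5, sigma=1.0, size=35)  # skewed, field-like Pb (mg/kg)

        boot_means = np.array([
            rng.choice(pb, size=pb.size, replace=True).mean()
            for _ in range(10_000)
        ])
        lo, hi = np.percentile(boot_means, [2.5, 97.5])
        print(f"mean = {pb.mean():.0f} mg/kg, 95% CI = ({lo:.0f}, {hi:.0f})")

        # Re-running with rng.choice(pb, size=7) mimics a sparser campaign and
        # shows how often a site near a 400 mg/kg screening level looks clean.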

  19. GUIDELINES FOR THE APPLICATION OF SEM/EDX ANALYTICAL TECHNIQUES FOR FINE AND COARSE PM SAMPLES

    EPA Science Inventory

    Scanning Electron Microscopy (SEM) coupled with Energy-Dispersive X-ray analysis (EDX) is a powerful tool in the characterization and source apportionment of environmental particulate matter (PM), providing size, chemistry, and morphology of particles as small as a few tenths ...

  20. Partial Least Square Analyses of Landscape and Surface Water Biota Associations in the Savannah River Basin

    EPA Science Inventory

    Ecologists are often faced with the problems of small sample sizes, large numbers of correlated predictors, and high noise-to-signal relationships. This necessitates excluding important variables from the model when applying standard multiple or multivariate regression analyses. In ...
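
    Partial least squares addresses exactly this setting by projecting the predictors onto a few latent components before regressing. A minimal scikit-learn sketch on synthetic data (the component count and data are illustrative assumptions):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n, p = 25, 60                        # fewer samples than predictors
        latent = rng.normal(size=(n, 2))     # two underlying gradients
        X = latent @ rng.normal(size=(2, p)) + 0.1 * rng.normal(size=(n, p))
        y = latent[:, 0] + 0.2 * rng.normal(size=n)

        pls = PLSRegression(n_components=2)  # project onto a few latent factors
        print(cross_val_score(pls, X, y, cv=5).mean())  # usable R^2 despite p >> n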

  1. Surface enhanced Raman spectroscopy: A review of recent applications in forensic science.

    PubMed

    Fikiet, Marisia A; Khandasammy, Shelby R; Mistek, Ewelina; Ahmed, Yasmine; Halámková, Lenka; Bueno, Justin; Lednev, Igor K

    2018-05-15

    Surface enhanced Raman spectroscopy has many advantages over its parent technique of Raman spectroscopy. Some of these advantages, such as increased sensitivity and selectivity, and therefore the possibility of small sample sizes and detection of low concentrations, are invaluable in the field of forensics. A variety of new SERS surfaces and novel approaches are presented here on a wide range of forensically relevant topics. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India—An application of small area estimation techniques

    PubMed Central

    Aditya, Kaustav; Sud, U. C.

    2018-01-01

    Poverty affects many people, but its ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important population parameter for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state levels. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances, estimates are required for areas of the population for which the survey providing the data was unplanned. Consequently, for areas with small sample sizes, direct survey estimation of population characteristics based only on data from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at the district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011–12 of the NSSO and the Population Census 2011. The results show that the district-level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable. PMID:29879202

  3. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India-An application of small area estimation techniques.

    PubMed

    Chandra, Hukum; Aditya, Kaustav; Sud, U C

    2018-01-01

    Poverty affects many people, but its ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important population parameter for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state levels. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances, estimates are required for areas of the population for which the survey providing the data was unplanned. Consequently, for areas with small sample sizes, direct survey estimation of population characteristics based only on data from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at the district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011-12 of the NSSO and the Population Census 2011. The results show that the district-level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable.
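
    Area-level SAE models typically shrink each area's direct survey estimate toward a regression ("synthetic") prediction built from auxiliary covariates such as census variables. The following stylized Python sketch of that composite-estimator idea uses invented numbers and treats the model variance as known; the paper's actual model and data differ:

        import numpy as np

        rng = np.random.default_rng(0)
        m = 12                                 # districts
        x = rng.uniform(0, 1, size=(m, 1))     # census covariate per district
        X = np.hstack([np.ones((m, 1)), x])
        theta = X @ np.array([0.20, 0.15])     # true poverty incidence
        D = rng.uniform(0.0005, 0.01, size=m)  # sampling variances (small n -> large D)
        direct = theta + rng.normal(scale=np.sqrt(D))

        sigma2_v = 0.001                       # area-model variance (assumed known)
        W = np.diag(1.0 / (sigma2_v + D))
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ direct)  # GLS regression fit
        gamma = sigma2_v / (sigma2_v + D)      # shrinkage: noisy areas borrow more
        composite = gamma * direct + (1 - gamma) * (X @ beta)
        print(np.abs(direct - theta).mean(), np.abs(composite - theta).mean())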

  4. Investigating the relative permeability behavior of microporosity-rich carbonates and tight sandstones with multiscale pore network models

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Stappen, Jeroen Van; Kock, Tim De; Boever, Wesley De; Boone, Marijn A.; Hoorebeke, Luc Van; Cnudde, Veerle

    2016-11-01

    The relative permeability behavior of rocks with wide ranges of pore sizes is in many cases still poorly understood and is difficult to model at the pore scale. In this work, we investigate the capillary pressure and relative permeability behavior of three outcrop carbonates and two tight reservoir sandstones with wide, multimodal pore size distributions. To examine how the drainage and imbibition properties of these complex rock types are influenced by the connectivity of macropores to each other and to zones with unresolved small-scale porosity, we apply a previously presented microcomputed-tomography-based multiscale pore network model to these samples. The sensitivity to the properties of the small-scale porosity is studied by performing simulations with different artificial sphere-packing-based networks as a proxy for these pores. Finally, the mixed-wet water-flooding behavior of the samples is investigated, assuming different wettability distributions for the microporosity and macroporosity. While this work is not an attempt to perform predictive modeling, it seeks to qualitatively explain the behavior of the investigated samples and illustrates some of the most recent developments in multiscale pore network modeling.

  5. Differential risk of injury in child occupants by passenger car classification.

    PubMed

    Kallan, Michael J; Durbin, Dennis R; Elliott, Michael R; Menon, Rajiv A; Winston, Flaura K

    2003-01-01

    In the United States, passenger cars are the most common passenger vehicle, yet they vary widely in size and crashworthiness. Using data collected from a population-based sample of crashes in State Farm-insured vehicles, we quantified the risk of injury to child occupants by passenger car size and classification. Injury risk is predicted by vehicle weight; however, there is an increased risk in both Large vs. Luxury and Sports vs. Small cars, despite similar average vehicle weights in both comparisons. Parents who are purchasing passenger cars should strongly consider the size of the vehicle and its crashworthiness.

  6. Differential Risk of Injury in Child Occupants by Passenger Car Classification

    PubMed Central

    Kallan, Michael J.; Durbin, Dennis R.; Elliott, Michael R.; Menon, Rajiv A.; Winston, Flaura K.

    2003-01-01

    In the United States, passenger cars are the most common passenger vehicle, yet they vary widely in size and crashworthiness. Using data collected from a population-based sample of crashes in State Farm-insured vehicles, we quantified the risk of injury to child occupants by passenger car size and classification. Injury risk is predicted by vehicle weight; however, there is an increased risk in both Large vs. Luxury and Sports vs. Small cars, despite similar average vehicle weights in both comparisons. Parents who are purchasing passenger cars should strongly consider the size of the vehicle and its crashworthiness. PMID:12941234

  7. Synthesis and characterization of mesoporous ZnS with narrow size distribution of small pores

    NASA Astrophysics Data System (ADS)

    Nistor, L. C.; Mateescu, C. D.; Birjega, R.; Nistor, S. V.

    2008-08-01

    Pure, nanocrystalline cubic ZnS forming a stable mesoporous structure was synthesized at room temperature by a non-toxic surfactant-assisted liquid-liquid reaction, in the pH range 9.5-10.5. The appearance of an X-ray diffraction (XRD) peak in the region of very small angles (~2°) reveals the presence of a porous material with a narrow pore size distribution but an irregular arrangement of the pores, a so-called worm-hole or sponge-like material. Analysis of the wide-angle XRD diffractograms shows the building blocks to be ZnS nanocrystals with cubic structure and an average diameter of 2 nm. Transmission electron microscopy (TEM) investigations confirm the XRD results; ZnS crystallites of 2.5 nm with cubic (blende) structure are the building blocks of the pore walls, with pore sizes from 1.9 to 2.5 nm and a broader size distribution for samples with smaller pores. Textural measurements (N2 adsorption-desorption isotherms) confirm the presence of mesoporous ZnS with a narrow range of small pore sizes. The relatively low surface area of around 100 m2/g is attributed to some remaining organic molecules, which fill the smallest pores. Their presence, confirmed by IR spectroscopy, seems to be responsible for the high stability of the resulting mesoporous ZnS as well.

  8. Extent of genome-wide linkage disequilibrium in Australian Holstein-Friesian cattle based on a high-density SNP panel.

    PubMed

    Khatkar, Mehar S; Nicholas, Frank W; Collins, Andrew R; Zenger, Kyall R; Cavanagh, Julie A L; Barris, Wes; Schnabel, Robert D; Taylor, Jeremy F; Raadsma, Herman W

    2008-04-24

    The extent of linkage disequilibrium (LD) within a population determines the number of markers that will be required for successful association mapping and marker-assisted selection. Most studies on LD in cattle reported to date are based on microsatellite markers or small numbers of single nucleotide polymorphisms (SNPs) covering one or only a few chromosomes. This is the first comprehensive study of the extent of LD in cattle, analyzing data on 1,546 Holstein-Friesian bulls genotyped for 15,036 SNP markers covering all regions of all autosomes. Furthermore, most studies in cattle have used relatively small sample sizes and, consequently, may have had biased estimates of measures commonly used to describe LD. We examine the minimum sample sizes required to estimate LD without bias and loss in accuracy. Finally, relatively little information is available on comparative LD structure in other mammalian species such as human and mouse, and we compare LD structure in cattle with public-domain data from both human and mouse. We computed three LD estimates, D', Dvol and r2, for 1,566,890 syntenic SNP pairs and a sample of 365,400 non-syntenic pairs. Mean D' is 0.189 among syntenic SNPs, and 0.105 among non-syntenic SNPs; mean r2 is 0.024 among syntenic SNPs and 0.0032 among non-syntenic SNPs. All three measures of LD for syntenic pairs decline with distance; the decline is much steeper for r2 than for D' and Dvol. The values of D' and Dvol are quite similar. Significant LD in cattle extends to 40 kb (when estimated as r2) and 8.2 Mb (when estimated as D'). The mean values of LD at large physical distances are close to those for non-syntenic SNPs. The minor allele frequency threshold affects the distribution and extent of LD. For unbiased and accurate estimates of LD across marker intervals spanning < 1 kb to > 50 Mb, minimum sample sizes of 400 (for D') and 75 (for r2) are required. The bias due to small sample sizes increases with the inter-marker interval. LD in cattle is much less extensive than in a mouse population created from crossing inbred lines, and more extensive than in humans. For association mapping in Holstein-Friesian cattle, for a given design, at least one SNP is required for each 40 kb, giving a total requirement of at least 75,000 SNPs for a low-power whole-genome scan (median r2 > 0.19) and up to 300,000 markers at 10 kb intervals for a high-power genome scan (median r2 > 0.62). For estimation of LD by D' and Dvol with sufficient precision, a sample size of at least 400 is required, whereas for r2 a minimum sample of 75 is adequate.
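
    For reference, the two standard pairwise measures reported above, D' and r2, can be computed from phased haplotypes of two biallelic loci as follows. This is a generic sketch of the textbook definitions (Dvol, a volume-based measure, is omitted), not the study's pipeline:

        import numpy as np

        def ld_measures(haps):
            """haps: (n, 2) array of 0/1 alleles at two loci, one row per haplotype."""
            pA, pB = haps[:, 0].mean(), haps[:, 1].mean()
            pAB = np.mean((haps[:, 0] == 1) & (haps[:, 1] == 1))
            D = pAB - pA * pB
            if D >= 0:
                Dmax = min(pA * (1 - pB), (1 - pA) * pB)
            else:
                Dmax = min(pA * pB, (1 - pA) * (1 - pB))
            d_prime = abs(D) / Dmax
            r2 = D**2 / (pA * (1 - pA) * pB * (1 - pB))
            return d_prime, r2

        haps = np.random.default_rng(3).integers(0, 2, size=(400, 2))
        print(ld_measures(haps))  # both near zero for independently drawn loci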

  9. Why large cells dominate estuarine phytoplankton

    USGS Publications Warehouse

    Cloern, James E.

    2018-01-01

    Surveys across the world oceans have shown that phytoplankton biomass and production are dominated by small cells (picoplankton) where nutrient concentrations are low, but large cells (microplankton) dominate when nutrient-rich deep water is mixed to the surface. I analyzed phytoplankton size structure in samples collected over 25 yr in San Francisco Bay, a nutrient-rich estuary. Biomass was dominated by large cells because their biomass selectively grew during blooms. Large-cell dominance appears to be a characteristic of ecosystems at the land–sea interface, and these places may therefore function as analogs to oceanic upwelling systems. Simulations with a size-structured NPZ model showed that runs of positive net growth rate persisted long enough for biomass of large, but not small, cells to accumulate. Model experiments showed that small cells would dominate in the absence of grazing, at lower nutrient concentrations, and at elevated (+5°C) temperatures. Underlying these results are two fundamental scaling laws: (1) large cells are grazed more slowly than small cells, and (2) grazing rate increases with temperature faster than growth rate. The model experiments suggest testable hypotheses about phytoplankton size structure at the land–sea interface: (1) anthropogenic nutrient enrichment increases cell size; (2) this response varies with temperature and only occurs at mid-high latitudes; (3) large-cell blooms can only develop when temperature is below a critical value, around 15°C; (4) cell size diminishes along temperature gradients from high to low latitudes; and (5) large-cell blooms will diminish or disappear where planetary warming increases temperature beyond their critical threshold.

  10. Small amount of water induced preparation of several morphologies for InBO3:Eu3+ phosphor via a facile boric acid flux method and their luminescent properties

    NASA Astrophysics Data System (ADS)

    Ding, Wen; Liang, Pan; Liu, Zhi-Hong

    2017-05-01

    Four kinds of morphologies of InBO3:Eu3+ phosphor have been prepared via a facile boric acid flux method simply by adjusting the small amount of added water. The prepared samples have been characterized by XRD, FT-IR, and SEM. It was found that the size and morphology of the samples could be effectively controlled by adjusting the reaction temperature, the reaction time, and especially the small amount of added water, which plays a critical role in controlling the morphology. The possible growth mechanisms for the microsphere and flower-like morphologies were further discussed on the basis of time-dependent experiments. Furthermore, the luminescence properties of the prepared InBO3:Eu3+ samples have been investigated by photoluminescence (PL) spectroscopy. The results show that the InBO3:Eu3+ phosphors exhibit strong orange emission under ultraviolet excitation at 237 nm. The monodisperse microsphere sample possesses the highest PL intensity among the four morphologies and can be used as a potential orange luminescent material.

  11. Tracing Staphylococcus aureus in small and medium-sized food-processing factories on the basis of molecular sub-species typing.

    PubMed

    Koreňová, Janka; Rešková, Zuzana; Véghová, Adriana; Kuchta, Tomáš

    2015-01-01

    Contamination by Staphylococcus aureus of the production environment of three small or medium-sized food-processing factories in Slovakia was investigated on the basis of sub-species molecular identification by multiple locus variable number of tandem repeats analysis (MLVA). On the basis of MLVA profiling, bacterial isolates were assigned to 31 groups. Data from repeated samplings over a period of 3 years made it possible to draw spatial and temporal maps of the contamination routes for the individual factories, as well as to identify potential persistent strains. The information obtained by MLVA typing allowed sources and routes of contamination to be identified and, subsequently, will allow the technical and sanitation measures ensuring hygiene to be optimized.

  12. Microfluidic interconnects

    DOEpatents

    Benett, William J.; Krulevitch, Peter A.

    2001-01-01

    A miniature connector for introducing microliter quantities of solutions into microfabricated fluidic devices, which incorporates a molded ring or seal set into a ferrule cartridge, with or without a compression screw. The fluidic connector, for example, joins standard high pressure liquid chromatography (HPLC) tubing to 1 mm diameter holes in silicon or glass, enabling ml-sized volumes of sample solutions to be merged with µl-sized devices. The connector has many features, including ease of connection and disconnection; a small footprint, which enables numerous connectors to be located in a small area; low dead volume; helium leak-tightness; and tubing that does not twist during connection. Thus the connector enables easy and effective exchange of microfluidic devices and introduction of different solutions into the devices.

  13. Casein polymorphism heterogeneity influences casein micelle size in milk of individual cows.

    PubMed

    Day, L; Williams, R P W; Otter, D; Augustin, M A

    2015-06-01

    Milk samples from individual cows producing small (148-155 nm) or large (177-222 nm) casein micelles were selected to investigate the relationship between the individual casein proteins, specifically κ- and β-casein phenotypes, and casein micelle size. Only κ-casein AA and β-casein A1A1, A1A2 and A2A2 phenotypes were found in the large casein micelle group. Among the small micelle group, both κ-casein and β-casein phenotypes were more diverse. κ-Casein AB was the dominant phenotype, and 3 combinations (AA, AB, and BB) were present in the small casein micelle group. A considerable mix of β-casein phenotypes was found, including B and I variants, which were only found in the small casein micelle group. The relative amount of κ-casein to total casein was significantly higher in the small micelle group, and the nonglycosylated and glycosylated κ-casein contents were higher in the milks with small casein micelles (primarily with κ-casein AB and BB variants) compared with the large micelle group. The ratio of glycosylated to nonglycosylated κ-casein was higher in the milks with small casein micelles compared with the milks with large casein micelles. This suggests that although the amount of κ-casein (both glycosylated and nonglycosylated) is associated with micelle size, an increased proportion of glycosylated κ-casein could be a more important and favorable factor for small micelle size. This suggests that the increased spatial requirement due to addition of the glycosyl group with increasing extent of glycosylation of κ-casein is one mechanism that controls casein micelle assembly and growth. In addition, increased electrostatic repulsion due to the sialyl residues on the glycosyl group could be a contributory factor. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Measurement of particle size distribution of soil and selected aggregate sizes using the hydrometer method and laser diffractometry

    NASA Astrophysics Data System (ADS)

    Guzmán, G.; Gómez, J. A.; Giráldez, J. V.

    2010-05-01

    Soil particle size distribution has traditionally been determined by the hydrometer or the sieve-pipette methods, both of them time consuming and requiring a relatively large soil sample. This can be a limitation in situations, such as the analysis of suspended sediment, where the sample is small. A possible alternative to these methods is an optical technique such as laser diffractometry. However, the literature indicates that the use of this technique as an alternative to the traditional methods is still limited, because of the difficulty of replicating the results obtained with the standard methods. In this study we present the percentages of soil grain sizes determined using laser diffractometry within the range 0.04-2000 μm, measured with a Beckman-Coulter® LS-230 with a 750 nm laser beam and software version 3.2, in five soils representative of southern Spain: Alameda, Benacazón, Conchuela, Lanjarón and Pedrera. In three of the studied soils (Alameda, Benacazón and Conchuela) the particle size distribution of each aggregate size class was also determined. Aggregate size classes were obtained by dry sieve analysis using a Retsch AS 200 basic®. Two hundred grams of air-dried soil were sieved for 150 s at an amplitude of 2 mm, yielding nine size classes between 2000 μm and 10 μm. Analyses were performed in triplicate. The soil sample preparation was also adapted to our conditions. A small amount of each soil sample (less than 1 g) was transferred to the fluid module full of running water and disaggregated by ultrasonication at energy level 4 with 80 ml of sodium hexametaphosphate solution for 580 seconds. Two replicates of each sample were performed. Each measurement consisted of a 90 second reading at a pump speed of 62. After the laser diffractometry analysis, each soil and its aggregate classes were processed by calibrating a soil-specific optical model, fitting the optical parameters that depend mainly on the color and the shape of the analyzed particles. As a second alternative, a single optical model valid for a broad range of soils, developed by the Department of Soil, Water, and Environmental Science of the University of Arizona (personal communication, already submitted), was tested. The results were compared with the particle size distribution measured in the same soils and aggregate classes using the hydrometer method. Preliminary results indicate a better calibration of the technique using the optical model of the Department of Soil, Water, and Environmental Science of the University of Arizona, which yielded good correlations (r2 > 0.85). This result suggests that, with an appropriate calibration of the optical model, laser diffractometry may provide a reliable soil particle characterization.

  15. Time-integrated sampling of fluvial suspended sediment: a simple methodology for small catchments

    NASA Astrophysics Data System (ADS)

    Phillips, J. M.; Russell, M. A.; Walling, D. E.

    2000-10-01

    Fine-grained (<62.5 µm) suspended sediment transport is a key component of the geochemical flux in most fluvial systems. The highly episodic nature of suspended sediment transport imposes a significant constraint on the design of sampling strategies aimed at characterizing the biogeochemical properties of such sediment. A simple sediment sampler, utilizing ambient flow to induce sedimentation by settling, is described. The sampler can be deployed unattended in small streams to collect time-integrated suspended sediment samples. In laboratory tests involving chemically dispersed sediment, the sampler collected a maximum of 71% of the input sample mass. However, under natural conditions, the existence of composite particles or flocs can be expected to increase the trapping efficiency significantly. Field trials confirmed that the particle size composition and total carbon content of the sediment collected by the sampler were statistically representative of the ambient suspended sediment.

  16. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    PubMed

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of the mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long-term follow-up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling method, sample size had an impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, evidently because small and large nuclei were (unconsciously) not included. Testing the prognostic value of a series of cut-off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides better prognostic value in patients with invasive breast cancer.
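
    Systematic random sampling of the kind described is straightforward to implement: order the candidate nuclei, pick a random starting offset, and take every k-th item, so the sample covers the whole size range. A minimal Python sketch with synthetic nuclear areas:

        import numpy as np

        rng = np.random.default_rng(7)
        areas = np.sort(rng.gamma(shape=4.0, scale=10.0, size=1000))  # all nuclei

        def systematic_sample(x, k, rng):
            """Every (len(x)//k)-th item from a random start within the first stride."""
            step = len(x) // k
            start = rng.integers(0, step)
            return x[start::step][:k]

        srs = systematic_sample(areas, 50, rng)
        print(srs.mean(), srs.std())  # SDNA-like spread tracks the population,
                                      # unlike a convenience pick that misses extremes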

  17. Determining chewing efficiency using a solid test food and considering all phases of mastication.

    PubMed

    Liu, Ting; Wang, Xinmiao; Chen, Jianshe; van der Glas, Hilbert W

    2018-07-01

    After a solid food has been chewed for N chewing cycles, the median particle size, X50, is determined by curve-fitting of the particle size distribution. The reduction of X50 with N is traditionally followed from N ≥ 15-20 cycles when using the artificial test food Optosil®, because of initially unreliable values of X50. The aims of the study were (i) to enable testing at small N-values by using initial particles of appropriate size, shape and amount, and (ii) to compare measures of chewing ability, i.e. chewing efficiency (the number of cycles needed to halve the initial particle size, N(1/2-Xo)) and chewing performance (X50 at a particular N-value, X50,N). Eight subjects with a natural dentition chewed 4 types of samples of Optosil particles: (1) 8 cubes of 8 mm, a border size relative to the bin sizes (the traditional test); (2) 9 half-cubes of 9.6 mm, a mid-size relative to the bin sizes, with a similar sample volume; (3) 4 half-cubes of 9.6 mm; and (4) 2 half-cubes of 9.6 mm, the latter two with reduced particle number and sample volume. All samples were tested with 4 N-values. Curve-fitting with a 2nd-order polynomial function yielded log(X50)-log(N) relationships, from which N(1/2-Xo) and X50,N were obtained. Reliable X50-values are obtained for all N-values when using half-cubes of a mid-size relative to the bin sizes. By using 2 or 4 half-cubes, determination of N(1/2-Xo) or X50,N needs fewer chewing cycles than the traditional protocol. Chewing efficiency is preferable to chewing performance because it compares inter-subject chewing ability at the same stage of food comminution, with constant intra-subject and inter-subject ratios within and between samples respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.
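
    The curve-fitting step can be sketched as follows: fit a 2nd-order polynomial to log(X50) versus log(N), then solve for the N at which X50 equals half the initial size. The data below are illustrative, not the study's measurements:

        import numpy as np

        N = np.array([3, 6, 12, 24])          # chewing-cycle counts tested
        X50 = np.array([7.1, 5.2, 3.4, 2.1])  # fitted median particle sizes (mm)
        X0 = 9.6                              # initial half-cube size (mm)

        a, b, c = np.polyfit(np.log(N), np.log(X50), deg=2)

        # Solve a*t^2 + b*t + (c - log(X0/2)) = 0 for t = log(N).
        roots = np.roots([a, b, c - np.log(X0 / 2)])
        t = roots[np.isreal(roots)].real
        t = t[(t >= np.log(N).min()) & (t <= np.log(N).max())]  # root in tested range
        print(f"N(1/2-Xo) = {np.exp(t[0]):.1f} cycles")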

  18. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances, for the one-sided and two-sided cases of the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations were written to augment the program simulation. The program codes generate tables of k associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
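
    The one-sided tolerance factor has a well-known closed form through the noncentral t-distribution, presumably the probability equation referred to above. A minimal SciPy sketch of that textbook relation (not necessarily what the codes implement): k is chosen so that, with confidence gamma, at least a proportion p of a normal population lies below xbar + k*s.

        import numpy as np
        from scipy import stats

        def k_one_sided(n, p=0.95, gamma=0.95):
            """Exact one-sided normal tolerance factor for sample size n."""
            delta = stats.norm.ppf(p) * np.sqrt(n)  # noncentrality parameter
            return stats.nct.ppf(gamma, df=n - 1, nc=delta) / np.sqrt(n)

        for n in (5, 10, 30, 100):
            print(n, round(k_one_sided(n), 3))  # k shrinks toward z_0.95 of about 1.645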

  19. Wave transmission approach based on modal analysis for embedded mechanical systems

    NASA Astrophysics Data System (ADS)

    Cretu, Nicolae; Nita, Gelu; Ioan Pop, Mihail

    2013-09-01

    An experimental method for determining the phase velocity in small solid samples is proposed. The method is based on measuring the resonant frequencies of a binary or ternary solid elastic system comprising the small sample of interest and a gauge material of manageable size. The wave transmission matrix of the combined system is derived, and the theoretical values of its eigenvalues are used to determine the expected eigenfrequencies which, equated with the measured values, allow for the numerical estimation of the phase velocities in both materials. The known phase velocity of the gauge material is then used to assess the accuracy of the method. To validate the method, the theoretical eigenfrequencies of the eigenmodes of the embedded elastic system were obtained by computer simulation together with the experimental values of the phase velocities. We conclude that the proposed experimental method may be reliably used to determine the elastic properties of small solid samples whose geometries do not allow a direct measurement of their resonant frequencies.
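
    The transmission-matrix idea can be sketched for the simplest case: longitudinal waves in a free-free two-segment rod. Multiply the segment transfer matrices and find the frequencies at which the force-transfer element vanishes. All dimensions and material values below are illustrative, not those of the experiment:

        import numpy as np
        from scipy.optimize import brentq

        def segment_matrix(omega, c, rho, A, L):
            """Transfer matrix for the state (displacement, force) across a rod segment."""
            k = omega / c
            z = rho * c * A * omega
            return np.array([[np.cos(k * L), np.sin(k * L) / z],
                             [-z * np.sin(k * L), np.cos(k * L)]])

        def free_free_residual(omega):
            # gauge (steel-like) followed by sample (aluminium-like), 1 cm^2 section
            T = (segment_matrix(omega, 5100.0, 2700.0, 1e-4, 0.02) @
                 segment_matrix(omega, 5900.0, 7800.0, 1e-4, 0.05))
            return T[1, 0]  # zero force at both free ends requires T[1,0] = 0

        # Bracket sign changes on a frequency grid, then refine each root.
        w = 2 * np.pi * np.linspace(1e3, 2e5, 4000)
        res = np.array([free_free_residual(wi) for wi in w])
        for i in np.flatnonzero(np.sign(res[:-1]) != np.sign(res[1:]))[:3]:
            root = brentq(free_free_residual, w[i], w[i + 1])
            print(f"eigenfrequency = {root / 2 / np.pi / 1e3:.1f} kHz")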

  20. Mid-level perceptual features distinguish objects of different real-world sizes.

    PubMed

    Long, Bria; Konkle, Talia; Cohen, Michael A; Alvarez, George A

    2016-01-01

    Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects: real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms (unrecognizable textures that loosely preserve an object's form). However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes. © 2015 APA, all rights reserved.
