Sample records for sample size considerations

  1. Simple and multiple linear regression: sample size considerations.

    PubMed

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
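
    As a hedged illustration of the closed-form variance formulae the article rearranges, the sketch below solves for n from the textbook expression Var(beta_j) = sigma^2 / (n * Var(X_j) * (1 - R_j^2)); all numeric inputs are invented for the example, not taken from the article.

    ```python
    # Minimal sketch: smallest n giving a target standard error for one
    # regression coefficient. Values below are illustrative assumptions.
    import math

    def n_for_target_se(sigma, sd_x, target_se, r2_other=0.0):
        """sigma     : residual SD of Y given all covariates (assumed)
           sd_x      : SD of the covariate of interest (assumed)
           target_se : desired standard error of the coefficient
           r2_other  : R^2 of X_j on the other covariates
                       (0 for simple regression; >0 inflates the variance)"""
        n = sigma**2 / (sd_x**2 * (1.0 - r2_other) * target_se**2)
        return math.ceil(n)

    # Residual SD 10, covariate SD 2, target SE 0.5
    print(n_for_target_se(sigma=10, sd_x=2, target_se=0.5))                # 100
    print(n_for_target_se(sigma=10, sd_x=2, target_se=0.5, r2_other=0.5)) # 200
    ```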

  2. Considerations for throughfall chemistry sample-size determination

    Treesearch

    Pamela J. Edwards; Paul Mohai; Howard G. Halverson; David R. DeWalle

    1989-01-01

    Both the number of trees sampled per species and the number of sampling points under each tree are important throughfall sampling considerations. Chemical loadings obtained from an urban throughfall study were used to evaluate the relative importance of both of these sampling factors in tests for determining species' differences. Power curves for detecting...

  3. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
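
    A minimal power-analysis sketch in the spirit of this abstract, using statsmodels' power routines; the means and common SD are invented placeholder values.

    ```python
    # A priori sample size for a two-sample t test at 80% power.
    from statsmodels.stats.power import TTestIndPower

    mu1, mu2, sd = 10.0, 12.0, 4.0
    effect_size = abs(mu2 - mu1) / sd          # Cohen's d = 0.5

    n_per_group = TTestIndPower().solve_power(
        effect_size=effect_size, alpha=0.05, power=0.80,
        ratio=1.0, alternative='two-sided')
    print(round(n_per_group))                  # ~64 per group for d = 0.5
    ```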

  4. A Typology of Mixed Methods Sampling Designs in Social Science Research

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.

    2007-01-01

    This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…

  5. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

    ERIC Educational Resources Information Center

    Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.

    2013-01-01

    Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…

  6. Considerations in Forest Growth Estimation Between Two Measurements of Mapped Forest Inventory Plots

    Treesearch

    Michael T. Thompson

    2006-01-01

    Several aspects of the enhanced Forest Inventory and Analysis (FIA) program's national plot design complicate change estimation. The design incorporates up to three separate plot sizes (microplot, subplot, and macroplot) to sample trees of different sizes. Because multiple plot sizes are involved, change estimators designed for polyareal plot sampling, such as those...

  7. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning an RCT.

  8. Grays Harbor and Chehalis River Improvements to Navigation Environmental Studies. Grays Harbor Ocean Disposal Study. Literature Review and Preliminary Benthic Sampling,

    DTIC Science & Technology

    1980-05-01

    transects extending approximately 16 kilometers from the mouth of Grays Harbor. Sub-samples were taken for grain size analysis and wood content. The samples were then washed on a 1.0 mm screen to separate benthic organisms from non-living materials. Consideration of the grain size analysis...

  9. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency with which sample size calculations were reported, and the sample sizes used, were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.

  10. Sample size considerations when groups are the appropriate unit of analyses

    PubMed Central

    Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith

    2007-01-01

    This paper discusses issues to be considered by nurse researchers when groups should be used as a unit of randomization. Advantages and disadvantages are presented, with statistical calculations needed to determine effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as a unit of randomization, it’s advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219
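
    The inflation for group randomization that the authors discuss is commonly computed with the standard design effect 1 + (m - 1) * ICC; the sketch below applies that generic formula under assumed values, not the paper's own worked example.

    ```python
    # Inflate an individually randomized sample size for group randomization.
    import math

    def cluster_adjusted_n(n_individual, cluster_size, icc):
        deff = 1.0 + (cluster_size - 1.0) * icc  # variance inflation from clustering
        return math.ceil(n_individual * deff)

    # 128 subjects needed under individual randomization; groups of 10, ICC 0.05
    print(cluster_adjusted_n(128, cluster_size=10, icc=0.05))  # -> 186
    ```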

  11. Using Structural Equation Modeling to Assess Functional Connectivity in the Brain: Power and Sample Size Considerations

    ERIC Educational Resources Information Center

    Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack

    2014-01-01

    The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…

  12. Planning Community-Based Assessments of HIV Educational Intervention Programs in Sub-Saharan Africa

    ERIC Educational Resources Information Center

    Kelcey, Ben; Shen, Zuchao

    2017-01-01

    A key consideration in planning studies of community-based HIV education programs is identifying a sample size large enough to ensure a reasonable probability of detecting program effects if they exist. Sufficient sample sizes for community- or group-based designs are proportional to the correlation or similarity of individuals within communities.…

  13. Statistical considerations in monitoring birds over large areas

    USGS Publications Warehouse

    Johnson, D.H.

    2000-01-01

    The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.

  14. A sequential bioequivalence design with a potential ethical advantage.

    PubMed

    Fuglsang, Anders

    2014-07-01

    This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.

  15. Influence of multidroplet size distribution on icing collection efficiency

    NASA Technical Reports Server (NTRS)

    Chang, H.-P.; Kimble, K. R.; Frost, W.; Shaw, R. J.

    1983-01-01

    Calculation of collection efficiencies of two-dimensional airfoils is carried out for a monodispersed droplet icing cloud and a multidispersed droplet cloud. Comparison is made with the experimental results reported in the NACA Technical Note series. The results of the study show considerably improved agreement with experiment when multidroplet size distributions are employed. The study then investigates the effect of collection efficiency on airborne particle droplet size sampling instruments. The bias introduced by sampling from different collection volumes is predicted.

  16. Statistical considerations for agroforestry studies

    Treesearch

    James A. Baldwin

    1993-01-01

    Statistical topics related to agroforestry studies are discussed, including study objectives, populations of interest, sampling schemes, sample sizes, estimation vs. hypothesis testing, and P-values. In addition, a relatively new and much improved histogram display is described.

  17. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    NASA Astrophysics Data System (ADS)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  18. Statistical Analysis Techniques for Small Sample Sizes

    NASA Technical Reports Server (NTRS)

    Navard, S. E.

    1984-01-01

    The small-sample-size problem encountered in the analysis of space-flight data is examined. Because so little data is available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on the considerations needed to choose the most appropriate test for a given type of analysis.

  19. Sample size requirements for the design of reliability studies: precision consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.
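
    For orientation, the sketch below evaluates the standard exact F-based confidence interval for the one-way random-effects ICC (Searle's interval) at an assumed design; the authors' own procedures are iterative and more refined, so this is only a simplified stand-in.

    ```python
    # Expected CI width for the ICC with k groups of size m, evaluated at an
    # assumed population ICC by plugging in the expected MSB/MSW ratio.
    from scipy.stats import f

    def icc_ci(icc, k, m, alpha=0.05):
        df1, df2 = k - 1, k * (m - 1)
        F_obs = 1.0 + m * icc / (1.0 - icc)      # expected between/within ratio
        FL = F_obs / f.ppf(1 - alpha / 2, df1, df2)
        FU = F_obs / f.ppf(alpha / 2, df1, df2)
        lo = (FL - 1.0) / (FL + m - 1.0)
        hi = (FU - 1.0) / (FU + m - 1.0)
        return lo, hi

    lo, hi = icc_ci(icc=0.3, k=30, m=10)         # assumed design
    print(f"expected 95% CI: ({lo:.3f}, {hi:.3f}), width {hi - lo:.3f}")
    ```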

  20. Day and night variation in chemical composition and toxicological responses of size segregated urban air PM samples in a high air pollution situation

    NASA Astrophysics Data System (ADS)

    Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.

    2015-11-01

    Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources of day and night affect the chemical composition of the PM size ranges, and how this in turn is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples in four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to the particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to image the PM samples. The chemical composition and the induced toxicological responses of the size-segregated PM samples showed considerable size-dependent differences as well as day-to-night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. Instead, almost all the PM samples were equally cytotoxic, and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than during the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation; this was reflected in the differing toxicity of the PM samples. Some of the day-to-night difference may also have been caused by differing wind directions transporting air masses from different emission sources during the day and the night. The present findings indicate the important role of local particle sources and atmospheric processes in the health-related toxicological properties of the PM. The varying toxicological responses evoked by the PM samples show the importance of examining various particle sizes. In particular, the considerable toxicological activity detected in the PM0.2 size range suggests that it is attributable to combustion sources, new particle formation and atmospheric processes.

  1. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^{1/2}) or O(N*^{1/2}). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
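
    A toy numeric check of the O(N^{1/2}) behaviour, under assumptions not taken from the paper: Bernoulli outcomes, a normal approximation for selecting the better arm, and total expected successes as the utility (a Colton-style patient-horizon model).

    ```python
    # Brute-force the per-arm trial size n that maximizes expected successes
    # across a population of size N; the last printed column is roughly constant.
    from math import sqrt
    from scipy.stats import norm

    p0, p1 = 0.50, 0.55                  # assumed control / experimental rates
    pbar = (p0 + p1) / 2

    def expected_successes(n, N):
        se = sqrt(2 * pbar * (1 - pbar) / n)
        p_correct = norm.cdf((p1 - p0) / se)   # P(trial picks the better arm)
        in_trial = n * (p0 + p1)
        after = (N - 2 * n) * (p_correct * p1 + (1 - p_correct) * p0)
        return in_trial + after

    for N in (10_000, 100_000, 1_000_000):
        grid = range(10, N // 2, max(10, N // 2000))
        best_n = max(grid, key=lambda n: expected_successes(n, N))
        print(N, best_n, round(best_n / sqrt(N), 2))
    ```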

  2. A general approach for sample size calculation for the three-arm 'gold standard' non-inferiority design.

    PubMed

    Stucke, Kathrin; Kieser, Meinhard

    2012-12-10

    In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Detection of linkage between a quantitative trait and a marker locus by the lod score method: sample size and sampling considerations.

    PubMed

    Demenais, F; Lathrop, G M; Lalouel, J M

    1988-07-01

    A simulation study is here conducted to measure the power of the lod score method to detect linkage between a quantitative trait and a marker locus in various situations. The number of families necessary to detect such linkage with 80% power is assessed for different sets of parameters at the trait locus and different values of the recombination fraction. The effects of varying the mode of sampling families and the sibship size are also evaluated.

  4. Ratio of Cut Surface Area to Leaf Sample Volume for Water Potential Measurements by Thermocouple Psychrometers

    PubMed Central

    Walker, Sue; Oosterhuis, Derrick M.; Wiebe, Herman H.

    1984-01-01

    Evaporative losses from the cut edge of leaf samples are of considerable importance in measurements of leaf water potential using thermocouple psychrometers. The ratio of cut surface area to leaf sample volume (area to volume ratio) has been used to give an estimate of possible effects of evaporative loss in relation to sample size. A wide range of sample sizes with different area to volume ratios has been used. Our results using Glycine max L. Merr. cv Bragg indicate that leaf samples with area to volume values less than 0.2 square millimeter per cubic millimeter give psychrometric leaf water potential measurements that compare favorably with pressure chamber measurements. PMID:16663578

  5. Sample size considerations for clinical research studies in nuclear cardiology.

    PubMed

    Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J

    2015-12-01

    Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
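
    A hedged companion sketch of two of the calculations the article surveys, done with statsmodels; the inputs (effect size, event rates) are invented examples rather than the article's own.

    ```python
    # Per-group sample sizes for a continuous endpoint (t test) and a binary
    # endpoint (normal approximation to the chi-squared test of proportions).
    from statsmodels.stats.power import TTestIndPower, NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    n_t = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05, power=0.8)

    es = proportion_effectsize(0.30, 0.15)      # Cohen's h for 30% vs 15%
    n_p = NormalIndPower().solve_power(effect_size=es, alpha=0.05, power=0.8)

    print(round(n_t), round(n_p))               # per-group sample sizes
    ```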

  6. Lexical development in Korean: vocabulary size, lexical composition, and late talking.

    PubMed

    Rescorla, Leslie; Lee, Youn Mi Cathy; Oh, Kyung Ja; Kim, Young Ah

    2013-04-01

    In this study, the authors aimed to compare vocabulary size, lexical composition, and late talking in large samples of Korean and U.S. children ages 18-35 months. Data for 2,191 Korean children (211 children recruited "offline" through preschools, and 1,980 recruited "online" via the Internet) and 274 U.S. children were obtained using the Language Development Survey (LDS). Mean vocabulary size was slightly larger in the offline than the online group, but the groups were acquiring almost identical words. Mean vocabulary size did not differ by country; girls and older children had larger vocabularies in both countries. The Korean-U.S. Q correlations for percentage use of LDS words (.53 and .56) indicated considerable concordance across countries in lexical composition. Noun dominance was as large in Korean lexicons as in U.S. lexicons. About half of the most commonly reported words for the Korean and U.S. children were identical. Lexicons of late talkers resembled those of typically developing younger children in the same sample. Despite linguistic and discourse differences between Korean and English, LDS findings indicated considerable cross-linguistic similarity with respect to vocabulary size, lexical composition, and late talking.

  7. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.

  8. 77 FR 26292 - Risk Evaluation and Mitigation Strategy Assessments: Social Science Methodologies to Assess Goals...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-03

    ... determine endpoints; questionnaire design and analyses; and presentation of survey results. To date, FDA has..., the workshop will invest considerable time in identifying best methodological practices for conducting... sample, sample size, question design, process, and endpoints. Panel 2 will focus on alternatives to...

  9. Rasch fit statistics and sample size considerations for polytomous data.

    PubMed

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-05-29

    Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire - 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges.

  10. Rasch fit statistics and sample size considerations for polytomous data

    PubMed Central

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-01-01

    Background Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Methods Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire – 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. Results The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. Conclusion It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges. PMID:18510722
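
    A stylized, software-agnostic illustration of the study's resampling design (not a Rasch analysis): replicate subsamples are drawn at increasing n; a mean-square-type statistic stays near its expectation under mild misfit, while a t-type standardization of the same quantity grows with sample size.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Stand-in "residuals" with slight misfit: variance 1.1 instead of 1.0
    population = rng.normal(0.0, np.sqrt(1.1), 50_000)

    for n in (25, 50, 100, 400, 1600, 3200):
        mean_sqs, t_stats = [], []
        for _ in range(10):                      # 10 resamples per sample size
            z = rng.choice(population, size=n, replace=True)
            ms = np.mean(z**2)                   # mean square: stable near 1.1
            se = np.std(z**2, ddof=1) / np.sqrt(n)
            mean_sqs.append(ms)
            t_stats.append((ms - 1.0) / se)      # significance grows with n
        print(n, round(np.mean(mean_sqs), 2), round(np.mean(t_stats), 1))
    ```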

  11. Determination of hydrogen abundance in selected lunar soils

    NASA Technical Reports Server (NTRS)

    Bustin, Roberta

    1987-01-01

    Hydrogen was implanted in lunar soil through solar wind activity. In order to determine the feasibility of utilizing this solar wind hydrogen, it is necessary to know not only the hydrogen abundances in bulk soils from a variety of locations but also the distribution of hydrogen within a given soil. Hydrogen distribution in bulk soils, grain size separates, mineral types, and core samples was investigated. Hydrogen was found in all samples studied. The amount varied considerably, depending on soil maturity, mineral types present, grain size distribution, and depth. Hydrogen implantation is definitely a surface phenomenon. However, as constructional particles are formed, previously exposed surfaces become embedded within particles, causing an enrichment of hydrogen in these species. In view of possibly extracting the hydrogen for use on the lunar surface, it is encouraging to know that hydrogen is present to a considerable depth and not only in the upper few millimeters. Based on these preliminary studies, extraction of solar wind hydrogen from lunar soil appears feasible, particularly if some kind of grain size separation is possible.

  12. Sample size considerations for paired experimental design with incomplete observations of continuous outcomes.

    PubMed

    Zhu, Hong; Xu, Xiaohan; Ahn, Chul

    2017-01-01

    Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in the literature. However, sample size methods for such a design are sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of the observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible enough to accommodate different missing patterns, magnitudes of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method leads to a more accurate sample size estimate compared with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.

  13. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725

  14. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
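
    A sketch of the beta-binomial risk computation described above, under assumed coverage thresholds, decision rule, and intracluster correlation (mapped to the beta-binomial via rho = 1/(a+b+1)); the authors' published code should be preferred for real designs.

    ```python
    # Misclassification risks for an "accept if successes >= d" rule when
    # clustering adds extra-binomial variation.
    from scipy.stats import betabinom

    def misclassification_risks(n, d, p_hi, p_lo, rho):
        def dist(p):
            s = (1.0 - rho) / rho                 # a + b chosen so corr = rho
            return betabinom(n, p * s, (1.0 - p) * s)
        alpha = dist(p_hi).cdf(d - 1)             # reject a truly good area
        beta = 1.0 - dist(p_lo).cdf(d - 1)        # accept a truly poor area
        return alpha, beta

    a, b = misclassification_risks(n=60, d=45, p_hi=0.80, p_lo=0.65, rho=0.05)
    print(f"P(reject | p=0.80) = {a:.3f}, P(accept | p=0.65) = {b:.3f}")
    ```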

  15. Factors to Consider in Designing Aerosol Inlet Systems for Engine Exhaust Plume Sampling

    NASA Technical Reports Server (NTRS)

    Anderson, Bruce

    2004-01-01

    This document consists of viewgraphs of charts and diagrams covering considerations for sampling an engine exhaust plume. It includes a chart comparing the emissions from various fuels and diagrams and charts of the processes and conditions that influence particulate size and concentration.

  16. How large a training set is needed to develop a classifier for microarray data?

    PubMed

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.

  17. Geochemical and radiological characterization of soils from former radium processing sites

    USGS Publications Warehouse

    Landa, E.R.

    1984-01-01

    Soil samples were collected from former radium processing sites in Denver, CO, and East Orange, NJ. Particle-size separations and radiochemical analyses of selected samples showed that while the greatest contents of both 226Ra and U were generally found in the finest (< 45 μm) fraction, the pattern was not always of progressive increase in radionuclide content with decreasing particle size. Leaching tests on these samples showed a large portion of the 226Ra and U to be soluble in dilute hydrochloric acid. Radon-emanation coefficients measured for bulk samples of contaminated soil were about 20%. Recovery of residual uranium and vanadium, as an adjunct to any remedial action program, appears unlikely due to economic considerations.

  18. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can arise when the sample size and allocation rate to the treatment arms are modified in an interim analysis. It is thereby assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increases in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  19. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
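
    A small Monte Carlo in the same spirit, assuming five standard-normal covariates and a plain maximum-likelihood logit; Firth's correction, which the paper examines, would need a dedicated implementation and is omitted here. Separated or non-convergent fits are flagged crudely rather than by the paper's criteria.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    beta_true = np.array([-2.0, 0.5, 0.5, 0.5, 0.5, 0.5])  # intercept + 5 covariates

    def simulate(n, reps=500):
        estimates, separated = [], 0
        for _ in range(reps):
            X = sm.add_constant(rng.normal(size=(n, 5)))
            p = 1.0 / (1.0 + np.exp(-X @ beta_true))
            y = rng.binomial(1, p)
            try:
                fit = sm.Logit(y, X).fit(disp=0, maxiter=200)
            except Exception:                    # separation / non-convergence
                separated += 1
                continue
            if np.any(np.abs(fit.params) > 10):  # crude separation flag
                separated += 1
                continue
            estimates.append(fit.params[1])
        return np.mean(estimates) - beta_true[1], separated

    for n in (50, 100, 200, 400):                # EPV rises roughly with n
        bias, sep = simulate(n)
        print(f"n={n:4d}  bias={bias:+.3f}  separated={sep}")
    ```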

  20. Multilevel factorial experiments for developing behavioral interventions: power, sample size, and resource considerations.

    PubMed

    Dziak, John J; Nahum-Shani, Inbal; Collins, Linda M

    2012-06-01

    Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions by helping investigators to screen several candidate intervention components simultaneously and to decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or when employees are nested within organizations). In this article, we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel, multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements, such as the number of clusters, the number of lower-level units, and the intraclass correlation, affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. (c) 2012 APA, all rights reserved
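
    A compact Monte Carlo sketch of the kind of power simulation described, for one factor of a 2x2 cluster-level factorial; the effect sizes, ICC, and cluster counts are invented, and cluster-robust OLS stands in for a full multilevel model.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)

    def power(n_clusters=32, m=20, icc=0.05, effect_a=0.25, reps=400, alpha=0.05):
        hits = 0
        for _ in range(reps):
            # Balanced random assignment of clusters to the 4 factorial cells
            cells = rng.permutation(np.tile([(0, 0), (0, 1), (1, 0), (1, 1)],
                                            (n_clusters // 4, 1)))
            a, b = np.repeat(cells[:, 0], m), np.repeat(cells[:, 1], m)
            g = np.repeat(np.arange(n_clusters), m)
            u = rng.normal(0, np.sqrt(icc), n_clusters)[g]       # cluster effects
            e = rng.normal(0, np.sqrt(1 - icc), n_clusters * m)  # unit-level noise
            y = effect_a * a + 0.0 * b + u + e                   # only factor A works
            X = sm.add_constant(np.column_stack([a, b]))
            fit = sm.OLS(y, X).fit(cov_type='cluster', cov_kwds={'groups': g})
            hits += fit.pvalues[1] < alpha
        return hits / reps

    print(power())   # estimated power to detect factor A's main effect
    ```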

  1. Geochemical and radiological characterization of soils from former radium processing sites.

    PubMed

    Landa, E R

    1984-02-01

    Soil samples were collected from former radium processing sites in Denver, CO, and East Orange, NJ. Particle-size separations and radiochemical analyses of selected samples showed that while the greatest contents of both 226Ra and U were generally found in the finest (less than 45 micron) fraction, the pattern was not always of progressive increase in radionuclide content with decreasing particle size. Leaching tests on these samples showed a large portion of the 226Ra and U to be soluble in dilute hydrochloric acid. Radon-emanation coefficients measured for bulk samples of contaminated soil were about 20%. Recovery of residual uranium and vanadium, as an adjunct to any remedial action program, appears unlikely due to economic considerations.

  2. Improving the Selection, Classification, and Utilization of Army Enlisted Personnel. Project A: Research Plan

    DTIC Science & Technology

    1983-05-01

    occur. 4) It is also true that during a given time period, at a given base, not all of the people in the sample will actually be available for testing...taken sample sizes into consideration, we currently estimate that with few exceptions, we will have adequate samples to perform the analysis of simple ...Balanced Half Sample Replications (BHSA). His analyses of simple cases have shown that this method is substantially more efficient than the

  3. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.

  4. The Consideration of Future Consequences and Health Behaviour: A Meta-Analysis.

    PubMed

    Murphy, Lisa; Dockray, Samantha

    2018-06-14

    The aim of this meta-analysis was to quantify the direction and strength of associations between the Consideration of Future Consequences (CFC) scale and intended and actual engagement in three categories of health-related behaviour: health risk, health promotive, and illness preventative/detective behaviour. A systematic literature search was conducted to identify studies that measured CFC and health behaviour. In total, sixty-four effect sizes were extracted from 53 independent samples. Effect sizes were synthesised using a random-effects model. Aggregate effect sizes for all behaviour categories were significant, albeit small in magnitude. There were no significant moderating effects of the length of CFC scale (long vs. short), population type (college students vs. non-college students), mean age, or sex proportion of study samples. CFC reliability and study quality score significantly moderated the overall association between CFC and health risk behaviour only. The magnitude of effect sizes is comparable to associations between health behaviour and other individual difference variables, such as the Big Five personality traits. The findings indicate that CFC is an important construct to consider in research on engagement in health risk behaviour in particular. Future research is needed to examine the optimal approach by which to apply the findings to behavioural interventions.

  5. Data splitting for artificial neural networks using SOM-based stratified sampling.

    PubMed

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
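
    A hedged sketch of stratified data splitting with Neyman allocation; a KMeans partition stands in for the SOM here to keep the example self-contained, whereas the paper trains a self-organizing map, and the per-stratum spread measure is a crude simplification.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def stratified_split(X, n_sample, n_strata=16, seed=0):
        rng = np.random.default_rng(seed)
        labels = KMeans(n_clusters=n_strata, n_init=10,
                        random_state=seed).fit_predict(X)
        sizes = np.array([np.sum(labels == k) for k in range(n_strata)])
        # Neyman allocation: sample each stratum in proportion to N_k * sigma_k
        sigmas = np.array([X[labels == k].std() + 1e-12 for k in range(n_strata)])
        alloc = n_sample * (sizes * sigmas) / np.sum(sizes * sigmas)
        take = []
        for k in range(n_strata):
            idx = np.flatnonzero(labels == k)
            n_k = min(len(idx), max(1, int(round(alloc[k]))))
            take.append(rng.choice(idx, size=n_k, replace=False))
        return np.concatenate(take)   # indices of the held-out subset

    X = np.random.default_rng(3).normal(size=(2000, 4))   # toy data
    test_idx = stratified_split(X, n_sample=400)
    print(len(test_idx))
    ```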

  6. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

    The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would be on the power of the trial and whether any inflation of the sample size would be required.
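
    The adjusted design effects the authors tested are of the kind used for parallel CRTs with unequal cluster sizes (e.g. the coefficient-of-variation adjustment of Eldridge and colleagues); a generic version under assumed values, not the paper's simulation settings, is shown below.

    ```python
    # Standard design effect vs a CV-adjusted design effect for unequal
    # cluster sizes; cv is the coefficient of variation of cluster size.

    def de_standard(m_bar, icc):
        return 1 + (m_bar - 1) * icc

    def de_unequal(m_bar, icc, cv):
        # Variance-inflation adjustment for unequal cluster sizes
        return 1 + ((cv**2 + 1) * m_bar - 1) * icc

    m_bar, icc, cv = 25, 0.05, 0.6
    print(round(de_standard(m_bar, icc), 2))    # 2.20
    print(round(de_unequal(m_bar, icc, cv), 2)) # 2.65
    ```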

  7. Blinded and unblinded internal pilot study designs for clinical trials with count data.

    PubMed

    Schneider, Simon; Schmidli, Heinz; Friede, Tim

    2013-07-01

    Internal pilot studies are a popular design feature to address uncertainties in the sample size calculations caused by vague information on nuisance parameters. Despite their popularity, it is only very recently that blinded sample size reestimation procedures for trials with count data were proposed and their properties systematically investigated. Although blinded procedures are favored by regulatory authorities, practical application is somewhat limited by fears that blinded procedures are prone to bias if the treatment effect was misspecified in the planning. Here, we compare unblinded and blinded procedures with respect to bias, error rates, and sample size distribution. We find that both procedures maintain the desired power and that the unblinded procedure is slightly liberal, whereas the actual significance level of the blinded procedure is close to the nominal level. Furthermore, we show that in situations where uncertainty about the assumed treatment effect exists, the blinded estimator of the control event rate is biased, in contrast to the unblinded estimator, which results in differences in mean sample sizes in favor of the unblinded procedure. However, these differences are rather small compared to the deviations of the mean sample sizes from the sample size required to detect the true, but unknown, effect. We demonstrate that the variation of the sample size resulting from the blinded procedure is in many practically relevant situations considerably smaller than that of the unblinded procedures. The methods are extended to overdispersed counts using a quasi-likelihood approach and are illustrated by trials in relapsing multiple sclerosis. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
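
    A minimal sketch of one blinded reestimation step for count data: pool the events across the (blinded) arms at the internal pilot, back out the control rate under the assumed rate ratio, and recompute a simple Wald-type sample size. The pilot numbers and assumed effect are invented, and the authors' procedures are more elaborate.

    ```python
    import math
    from scipy.stats import norm

    def n_per_arm(lam0, theta, alpha=0.05, power=0.80):
        lam1 = theta * lam0
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        # Wald sample size for comparing two Poisson rates, unit follow-up
        return math.ceil(z**2 * (lam0 + lam1) / (lam0 - lam1)**2)

    # Blinded pilot: 180 events over 150 patient-years, both arms pooled
    lam_blind = 180 / 150
    theta = 0.7                           # assumed rate ratio (treatment effect)
    lam0 = 2 * lam_blind / (1 + theta)    # implied control rate under 1:1 allocation
    print(n_per_arm(lam0, theta))
    ```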

  8. Population-Based Resequencing of Experimentally Evolved Populations Reveals the Genetic Basis of Body Size Variation in Drosophila melanogaster

    PubMed Central

    Turner, Thomas L.; Stewart, Andrew D.; Fields, Andrew T.; Rice, William R.; Tarone, Aaron M.

    2011-01-01

    Body size is a classic quantitative trait with evolutionarily significant variation within many species. Locating the alleles responsible for this variation would help understand the maintenance of variation in body size in particular, as well as quantitative traits in general. However, successful genome-wide association of genotype and phenotype may require very large sample sizes if alleles have low population frequencies or modest effects. As a complementary approach, we propose that population-based resequencing of experimentally evolved populations allows for considerable power to map functional variation. Here, we use this technique to investigate the genetic basis of natural variation in body size in Drosophila melanogaster. Significant differentiation of hundreds of loci in replicate selection populations supports the hypothesis that the genetic basis of body size variation is very polygenic in D. melanogaster. Significantly differentiated variants are limited to single genes at some loci, allowing precise hypotheses to be formed regarding causal polymorphisms, while other significant regions are large and contain many genes. By using significantly associated polymorphisms as a priori candidates in follow-up studies, these data are expected to provide considerable power to determine the genetic basis of natural variation in body size. PMID:21437274

  9. Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.

    PubMed

    Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T

    2015-03-01

    It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review aims to discuss the relative usefulness of sparse vs rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.

  10. Sampling strategies for radio-tracking coyotes

    USGS Publications Warehouse

    Smith, G.J.; Cary, J.R.; Rongstad, O.J.

    1981-01-01

    Ten coyotes radio-tracked for 24 h periods were most active at night and moved little during daylight hours. Home-range size determined from radio-locations of 3 adult coyotes increased with the number of locations until an asymptote was reached at about 35-40 independent day locations or 3-6 nights of hourly radio-locations. Activity of the coyote did not affect the asymptotic nature of the home-range calculations, but home-range sizes determined from more than 3 nights of hourly locations were considerably larger than home-range sizes determined from daylight locations. Coyote home-range sizes were calculated from daylight locations, full-night tracking periods, and half-night tracking periods. Full- and half-night sampling strategies involved obtaining hourly radio-locations during 12 and 6 h periods, respectively. The half-night sampling strategy was the best compromise for our needs, as it adequately indexed the home-range size, reduced time and energy spent, and standardized the area calculation without requiring the researcher to become completely nocturnal. Sight tracking also provided information about coyote activity and sociability.

  11. Statistical considerations in evaluating pharmacogenomics-based clinical effect for confirmatory trials.

    PubMed

    Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James

    2010-10-01

    Current practice for seeking genomically favorable patients in randomized controlled clinical trials uses genomic convenience samples. We discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of convenience samples, particularly when they are small; articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance; and highlight the importance of replicating the subgroup finding in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make the treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding about how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then as a general rule of thumb a sample size of at least 100 patients may need to be considered for the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more difference in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10%, or 5% difference is of concern.
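
    The quoted rule of thumb is easy to check by simulation. A minimal sketch; the marker prevalence of 0.3 is chosen purely for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def prob_imbalance(n_per_arm: int, prevalence: float,
                       threshold: float = 0.20, n_sim: int = 100_000) -> float:
        """Monte Carlo probability that marker-positive prevalence differs
        between the two arms by at least `threshold` under simple randomisation."""
        a = rng.binomial(n_per_arm, prevalence, n_sim) / n_per_arm
        b = rng.binomial(n_per_arm, prevalence, n_sim) / n_per_arm
        return float(np.mean(np.abs(a - b) >= threshold))

    for n in (25, 50, 100, 150):
        print(n, prob_imbalance(n, prevalence=0.3))
    ```

    With 100 patients per arm, the chance of a 20% or greater difference in prevalence is well below 1% in this setting, while with 25 per arm it exceeds 10%, roughly in line with the recommendation above.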

  12. Evidence for Electromagnetic Granularity in the Polycrystalline Iron-Based Superconductor LaO(0.89)F(0.11)FeAs

    DTIC Science & Technology

    2008-01-01

    oriented grain-boundaries. In this work we show considerable evidence for such weak coupling by study of the dependence of magnetization in bulk and ... powdered samples. Bulk sample magnetization curves show very little hysteresis while remanent magnetization shows almost no sample size dependence ...
    [Figure caption residue: Fig. 2, magnetization hysteresis loops at 5 and 20 K for the bulk LaO0.89F0.11FeAs; inset shows the temperature dependence (truncated).]

  13. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
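
    For the common case of comparing two means, the considerations above reduce to a short formula. A minimal sketch using the normal approximation with a commonly used small-sample correction term; the printed values match conventional tables for 80% power at α = 0.05:

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
        """Approximate n per group for a two-sided two-sample t test on a
        standardised effect size d."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n = 2 * (z / d) ** 2 + norm.ppf(1 - alpha / 2) ** 2 / 4  # correction
        return ceil(n)

    for d in (0.2, 0.5, 0.8):       # Cohen's small/medium/large effects
        print(d, n_per_group(d))    # 394, 64, 26 per group
    ```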

  14. Statistical theory and methodology for remote sensing data analysis

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1974-01-01

    A model is developed for the evaluation of acreages (proportions) of different crop types over a geographical area using a classification approach, and methods for estimating the crop acreages are given. In estimating the acreage of a specific crop type such as wheat, it is suggested to treat the problem as a two-crop problem: wheat vs. non-wheat, since this simplifies the estimation problem considerably. The error analysis and the sample size problem are investigated for the two-crop approach. Certain numerical results for sample sizes are given for a JSC-ERTS-1 data example on wheat identification performance in Hill County, Montana and Burke County, North Dakota. Lastly, for a large-area crop acreage inventory, a sampling scheme is suggested for acquiring sample data, and the problem of crop acreage estimation and the error analysis are discussed.

  15. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
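
    The multinomial-weighting formulation translates directly into matrix operations. The paper works in R; this NumPy sketch follows the same idea for Pearson's correlation coefficient:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def boot_corr_vectorized(x: np.ndarray, y: np.ndarray, B: int) -> np.ndarray:
        """Bootstrap replications of Pearson's r via multinomial weights:
        each replication reweights the observed pairs by multinomial counts,
        so all B replications reduce to a handful of matrix products."""
        n = len(x)
        W = rng.multinomial(n, np.full(n, 1.0 / n), size=B)   # B x n counts
        mx, my = W @ x / n, W @ y / n                         # weighted means
        mxx, myy, mxy = W @ (x * x) / n, W @ (y * y) / n, W @ (x * y) / n
        cov = mxy - mx * my
        return cov / np.sqrt((mxx - mx**2) * (myy - my**2))

    x = rng.normal(size=50)
    y = 0.6 * x + rng.normal(scale=0.8, size=50)
    reps = boot_corr_vectorized(x, y, B=10_000)
    print(reps.mean(), np.percentile(reps, [2.5, 97.5]))  # bootstrap CI for r
    ```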

  16. Researchers’ Intuitions About Power in Psychological Research

    PubMed Central

    Bakker, Marjan; Hartgerink, Chris H. J.; Wicherts, Jelte M.; van der Maas, Han L. J.

    2016-01-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers’ experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. PMID:27354203

  17. Researchers' Intuitions About Power in Psychological Research.

    PubMed

    Bakker, Marjan; Hartgerink, Chris H J; Wicherts, Jelte M; van der Maas, Han L J

    2016-08-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers' experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. © The Author(s) 2016.

  18. Development of a copula-based particle filter (CopPF) approach for hydrologic data assimilation under consideration of parameter interdependence

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.

    2017-06-01

    In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure, with new particles then sampled from the fitted copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for a full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performance with low sample size scenarios. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method can provide more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size does not significantly influence the performance of CopPF. Also, the copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled by copulas in most sample size scenarios.

  19. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150
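
    Stripped of the calibration-variance term and the Lagrange-multiplier machinery, the core precision-and-confidence calculation reduces to a one-line formula for the number of scans. A simplified sketch only, with hypothetical coefficients of variation:

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_scans(cv: float, precision: float, confidence: float = 0.95) -> int:
        """Scans needed so the CI half-width equals `precision` (a fraction of
        the mean ED), given the CV of repeated measurements."""
        z = norm.ppf(0.5 + confidence / 2)
        return ceil((z * cv / precision) ** 2)

    # Lower anticipated ED generally means a larger relative measurement CV
    for cv in (0.07, 0.14):                 # hypothetical between-scan CVs
        print(cv, n_scans(cv, precision=0.05))
    ```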

  20. Samples in applied psychology: over a decade of research in review.

    PubMed

    Shen, Winny; Kiger, Thomas B; Davies, Stacy E; Rasch, Rena L; Simon, Kara M; Ones, Deniz S

    2011-09-01

    This study examines sample characteristics of articles published in Journal of Applied Psychology (JAP) from 1995 to 2008. At the individual level, the overall median sample size over the period examined was approximately 173, which is generally adequate for detecting the average magnitude of effects of primary interest to researchers who publish in JAP. Samples using higher units of analyses (e.g., teams, departments/work units, and organizations) had lower median sample sizes (Mdn ≈ 65), yet were arguably robust given typical multilevel design choices of JAP authors despite the practical constraints of collecting data at higher units of analysis. A substantial proportion of studies used student samples (~40%); surprisingly, median sample sizes for student samples were smaller than working adult samples. Samples were more commonly occupationally homogeneous (~70%) than occupationally heterogeneous. U.S. and English-speaking participants made up the vast majority of samples, whereas Middle Eastern, African, and Latin American samples were largely unrepresented. On the basis of study results, recommendations are provided for authors, editors, and readers, which converge on 3 themes: (a) appropriateness and match between sample characteristics and research questions, (b) careful consideration of statistical power, and (c) the increased popularity of quantitative synthesis. Implications are discussed in terms of theory building, generalizability of research findings, and statistical power to detect effects. PsycINFO Database Record (c) 2011 APA, all rights reserved

  1. Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument

    NASA Astrophysics Data System (ADS)

    Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory

    2014-10-01

    The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present CERES models are operating aboard the Terra, Aqua and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so that the size of the smallest features which can be resolved from the data increases, and spatial sampling errors increase with nadir angle. This paper presents an analysis of the effect of nadir angle on spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors are created by smoothing, or blurring, of features which are the size of the footprint and smaller, and by inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.

  2. Declustering of clustered preferential sampling for histogram and semivariogram inference

    USGS Publications Warehouse

    Olea, R.A.

    2007-01-01

    Measurements of attributes obtained more as a consequence of business ventures than sampling design frequently result in samplings that are preferential both in location and value, typically in the form of clusters along the pay. Preferential sampling requires preprocessing for the purpose of properly inferring characteristics of the parent population, such as the cumulative distribution and the semivariogram. Consideration of the distance to the nearest neighbor allows preparation of resampled sets that produce comparable results to those from previously proposed methods. Clustered sampling of size 140, taken from an exhaustive sampling, is employed to illustrate this approach. © International Association for Mathematical Geology 2007.
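
    One simple scheme in the spirit of nearest-neighbour declustering (the paper's exact resampling procedure may differ) is to resample locations with probability proportional to their nearest-neighbour distance, down-weighting clustered points:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)

    def decluster_resample(coords: np.ndarray, values: np.ndarray,
                           size: int) -> np.ndarray:
        """Resample values with probability proportional to each point's
        nearest-neighbour distance."""
        d, _ = cKDTree(coords).query(coords, k=2)   # d[:, 1] = NN distance
        w = d[:, 1] / d[:, 1].sum()
        idx = rng.choice(len(values), size=size, replace=False, p=w)
        return values[idx]

    # Preferential sampling: dense cluster of high values on a sparse background
    bg = rng.uniform(0, 100, size=(100, 2))
    cl = rng.normal(50, 2, size=(40, 2))
    coords = np.vstack([bg, cl])
    values = np.concatenate([rng.normal(10, 2, 100), rng.normal(25, 2, 40)])
    print(values.mean(), decluster_resample(coords, values, size=80).mean())
    ```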

  3. Sample size and power considerations in network meta-analysis

    PubMed Central

    2012-01-01

    Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far only received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
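
    The heuristic core of the effective sample size is that the variances of the two direct comparisons add, so their information combines harmonically. A minimal sketch on that assumption (the paper's methods generalise this to whole networks):

    ```python
    def effective_sample_size(n_ac: float, n_bc: float) -> float:
        """Effective sample size of an indirect comparison A vs B through a
        common comparator C: 1/n_eff = 1/n_AC + 1/n_BC."""
        return n_ac * n_bc / (n_ac + n_bc)

    # Two direct comparisons of 1000 patients each support an indirect
    # comparison worth roughly 500 patients.
    print(effective_sample_size(1000, 1000))   # 500.0
    print(effective_sample_size(2000, 500))    # 400.0
    ```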

  4. Mass size distribution of particle-bound water

    NASA Astrophysics Data System (ADS)

    Canepari, S.; Simonetti, G.; Perrino, C.

    2017-09-01

    The thermal-ramp Karl-Fisher method (tr-KF) for the determination of PM-bound water has been applied to size-segregated PM samples collected in areas subjected to different environmental conditions (protracted atmospheric stability, desert dust intrusion, urban atmosphere). This method, based on the use of a thermal ramp for the desorption of water from PM samples and the subsequent analysis by the coulometric KF technique, had previously been shown to differentiate water contributions retained with different strengths and associated with different chemical components in the atmospheric aerosol. The application of the method to size-segregated samples has revealed that water showed a typical mass size distribution in each of the three environmental situations that were taken into consideration. A very similar size distribution was shown by the chemical PM components that prevailed during each event: ammonium nitrate in the case of atmospheric stability, crustal species in the case of desert dust, road-dust components in the case of urban sites. The shape of the tr-KF curve varied according to the size of the collected particles. Considering the size ranges that better characterize each event (fine fraction for atmospheric stability, coarse fraction for dust intrusion, bi-modal distribution for urban dust), this shape is consistent with the typical tr-KF shape shown by water bound to the chemical species that predominate in the same PM size range (ammonium nitrate, crustal species, secondary/combustion species - road dust components).

  5. Embedding clinical interventions into observational studies

    PubMed Central

    Newman, Anne B.; Avilés-Santa, M. Larissa; Anderson, Garnet; Heiss, Gerardo; Howard, Wm. James; Krucoff, Mitchell; Kuller, Lewis H.; Lewis, Cora E.; Robinson, Jennifer G.; Taylor, Herman; Treviño, Roberto P.; Weintraub, William

    2017-01-01

    Novel approaches to observational studies and clinical trials could improve the cost-effectiveness and speed of translation of research. Hybrid designs that combine elements of clinical trials with observational registries or cohort studies should be considered as part of a long-term strategy to transform clinical trials and epidemiology, adapting to the opportunities of big data and the challenges of constrained budgets. Important considerations include study aims, timing, breadth and depth of the existing infrastructure that can be leveraged, participant burden, likely participation rate and available sample size in the cohort, required sample size for the trial, and investigator expertise. Community engagement and stakeholder (including study participants) support are essential for these efforts to succeed. PMID:26611435

  6. The size of a pilot study for a clinical trial should be calculated in relation to considerations of precision and efficiency.

    PubMed

    Sim, Julius; Lewis, Martyn

    2012-03-01

    To investigate methods to determine the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations based on confidence intervals (CIs) for the sample standard deviation (SD). Based on CIs for the sample SD, methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
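
    Method (1) can be sketched compactly: inflate the observed pilot SD to a one-sided upper confidence limit before plugging it into the usual sample size formula. A minimal illustration; the pilot size, SD, and target difference are hypothetical:

    ```python
    from math import ceil, sqrt
    from scipy.stats import chi2, norm

    def adjusted_sd(sd_obs: float, n_pilot: int, assurance: float = 0.80) -> float:
        """One-sided upper confidence limit for the SD from a pilot study."""
        df = n_pilot - 1
        return sd_obs * sqrt(df / chi2.ppf(1 - assurance, df))

    def n_per_group(sd: float, delta: float, alpha: float = 0.05,
                    power: float = 0.80) -> int:
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z * sd / delta) ** 2)

    sd_pilot = 10.0                                           # pilot of n = 20
    print(n_per_group(sd_pilot, delta=5.0))                   # naive: 63 per group
    print(n_per_group(adjusted_sd(sd_pilot, 20), delta=5.0))  # ~87 per group
    ```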

  7. Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences.

    PubMed

    Voelz, David G; Roggemann, Michael C

    2009-11-10

    Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
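
    The three regimes follow from comparing the grid spacing with the critical spacing λz/L of the chirp. A minimal sketch of that classification for the transfer-function chirp; the numbers are illustrative:

    ```python
    def fresnel_sampling_regime(wavelength: float, z: float, L: float, N: int) -> str:
        """Compare dx = L/N with the critical spacing lambda*z/L: equality is
        ideal sampling of the transfer-function chirp; larger dx oversamples
        it and smaller dx undersamples it (favouring the impulse-response form)."""
        dx = L / N
        dx_crit = wavelength * z / L
        if abs(dx - dx_crit) / dx_crit < 1e-6:
            return "ideally sampled"
        return "oversampled" if dx > dx_crit else "undersampled"

    # 0.5 mm spacing, 250-point grid, HeNe wavelength, 2 m propagation
    print(fresnel_sampling_regime(633e-9, z=2.0, L=0.125, N=250))
    # Distance at which this grid is ideally sampled: z = L*dx/lambda
    print(0.125 * (0.125 / 250) / 633e-9, "m")
    ```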

  8. Nintendo Wii Fit as an adjunct to physiotherapy following lower limb fractures: preliminary feasibility, safety and sample size considerations.

    PubMed

    McPhail, S M; O'Hara, M; Gane, E; Tonks, P; Bullock-Saxton, J; Kuys, S S

    2016-06-01

    The Nintendo Wii Fit integrates virtual gaming with body movement, and may be suitable as an adjunct to conventional physiotherapy following lower limb fractures. This study examined the feasibility and safety of using the Wii Fit as an adjunct to outpatient physiotherapy following lower limb fractures, and reports sample size considerations for an appropriately powered randomised trial. Ambulatory patients receiving physiotherapy following a lower limb fracture participated in this study (n=18). All participants received usual care (individual physiotherapy). The first nine participants also used the Wii Fit under the supervision of their treating clinician as an adjunct to usual care. Adverse events, fracture malunion or exacerbation of symptoms were recorded. Pain, balance and patient-reported function were assessed at baseline and discharge from physiotherapy. No adverse events were attributed to either the usual care physiotherapy or Wii Fit intervention for any patient. Overall, 15 (83%) participants completed both assessments and interventions as scheduled. For 80% power in a clinical trial, the number of complete datasets required in each group to detect a small, medium or large effect of the Wii Fit at a post-intervention assessment was calculated at 175, 63 and 25, respectively. The Nintendo Wii Fit was safe and feasible as an adjunct to ambulatory physiotherapy in this sample. When considering a likely small effect size and the 17% dropout rate observed in this study, 211 participants would be required in each clinical trial group. A larger effect size or multiple repeated measures design would require fewer participants. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  9. Expectations and Support for Scholarly Activity in Schools of Business.

    ERIC Educational Resources Information Center

    Bohrer, Paul; Dolphin, Robert, Jr.

    1985-01-01

    Addresses issues relating to scholarship productivity and examines these issues with consideration given to the size and the accreditation status of the business schools sampled. First, how important is scholarly activity within an institution for a variety of personnel decisions? Second, what is the relative importance of various types of…

  10. DOSESCREEN: a computer program to aid dose placement

    Treesearch

    Kimberly C. Smith; Jacqueline L. Robertson

    1984-01-01

    Careful selection of an experimental design for a bioassay substantially improves the precision of effective dose (ED) estimates. Design considerations typically include determination of sample size, dose selection, and allocation of subjects to doses. DOSESCREEN is a computer program written to help investigators select an efficient design for the estimation of an...

  11. Critical considerations when planning experimental in vivo studies in dental traumatology.

    PubMed

    Andreasen, Jens O; Andersson, Lars

    2011-08-01

    In vivo studies are sometimes needed to understand healing processes after trauma. For several reasons, not the least ethical, such studies have to be carefully planned and important considerations have to be taken into account about suitability of the experimental model, sample size and optimizing the accuracy of the analysis. Several manuscripts of in vivo studies are submitted for publication to Dental Traumatology and rejected because of inadequate design, methodology or insufficient documentation of the results. The authors have substantial experience in experimental in vivo studies of tissue healing in dental traumatology and share their knowledge regarding critical considerations when planning experimental in vivo studies. © 2011 John Wiley & Sons A/S.

  12. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.

  13. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    PubMed

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. Proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. Results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
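
    The bias that motivates the corrected estimators is easy to reproduce. A stylized simulation of a fully curtailed design, showing the naive (biased) estimator rather than the paper's corrected one; reading the published decision rule as "k successes to accept" is an assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def curtailed_lqas_naive(p: float, n: int = 60, k: int = 33,
                             n_sim: int = 20_000) -> float:
        """Fully curtailed LQAS: stop once 'accept' (k successes) or 'reject'
        (n - k + 1 failures) is certain; return the mean of the naive
        estimate successes/inspected."""
        est = np.empty(n_sim)
        for i in range(n_sim):
            s = f = 0
            while s < k and f < n - k + 1:
                if rng.random() < p:
                    s += 1
                else:
                    f += 1
            est[i] = s / (s + f)
        return est.mean()

    for p in (0.4, 0.6, 0.8):
        print(p, round(curtailed_lqas_naive(p), 3))  # naive mean deviates from p
    ```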

  14. High temperature microstructural stability and recrystallization mechanisms in 14YWT alloys

    DOE PAGES

    Aydogan, E.; El-Atwani, O.; Takajo, S.; ...

    2018-02-09

    In-situ neutron diffraction experiments were performed on room temperature compressed 14YWT nanostructured ferritic alloys at 1100°C and 1150°C to understand their thermally activated static recrystallization mechanisms. The existence of a high density of Y-Ti-O-rich nano-oxides (<5 nm) shifts the recrystallization temperature up due to Zener pinning of the grain boundaries, making these materials attractive for high temperature applications. This study serves to quantify the texture evolution in-situ and understand the effect of particles on the recrystallization mechanisms in 14YWT alloys. We have shown, both experimentally and theoretically, that there is considerable recovery in the 20% compressed sample after 6.5 h of annealing at 1100°C, while recrystallization occurs within an hour of annealing at 1100°C and 1150°C in the 60% compressed samples. Moreover, the 60% compressed samples show {112}<110> and {112}<111> texture components during annealing, in contrast to the conventional recrystallization textures in body centered cubic alloys. Furthermore, nano-oxide size, shape, density and distribution are considerably different in unrecrystallized and abnormally grown grains. Transmission electron microscopy analysis shows that oxide particles having a size between 5 and 30 nm play a critical role in the recrystallization mechanisms of 14YWT nanostructured ferritic alloys.

  15. High temperature microstructural stability and recrystallization mechanisms in 14YWT alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aydogan, E.; El-Atwani, O.; Takajo, S.

    In-situ neutron diffraction experiments were performed on room temperature compressed 14YWT nanostructured ferritic alloys at 1100°C and 1150°C to understand their thermally activated static recrystallization mechanisms. The existence of a high density of Y-Ti-O-rich nano-oxides (<5 nm) shifts the recrystallization temperature up due to Zener pinning of the grain boundaries, making these materials attractive for high temperature applications. This study serves to quantify the texture evolution in-situ and understand the effect of particles on the recrystallization mechanisms in 14YWT alloys. We have shown, both experimentally and theoretically, that there is considerable recovery in the 20% compressed sample after 6.5 h of annealing at 1100°C, while recrystallization occurs within an hour of annealing at 1100°C and 1150°C in the 60% compressed samples. Moreover, the 60% compressed samples show {112}<110> and {112}<111> texture components during annealing, in contrast to the conventional recrystallization textures in body centered cubic alloys. Furthermore, nano-oxide size, shape, density and distribution are considerably different in unrecrystallized and abnormally grown grains. Transmission electron microscopy analysis shows that oxide particles having a size between 5 and 30 nm play a critical role in the recrystallization mechanisms of 14YWT nanostructured ferritic alloys.

  16. Quality control considerations for size exclusion chromatography with online ICP-MS: a powerful tool for evaluating the size dependence of metal-organic matter complexation.

    PubMed

    McKenzie, Erica R; Young, Thomas M

    2013-01-01

    Size exclusion chromatography (SEC), which separates molecules based on molecular volume, can be coupled with online inductively coupled plasma mass spectrometry (ICP-MS) to explore size-dependent metal-natural organic matter (NOM) complexation. To make effective use of this analytical dual detector system, the operator should be mindful of quality control measures. Al, Cr, Fe, Se, and Sn all exhibited columnless attenuation, which indicated unintended interactions with system components. Based on signal-to-noise ratio and peak reproducibility between duplicate analyses of environmental samples, consistent peak time and height were observed for Mg, Cl, Mn, Cu, Br, and Pb. Al, V, Fe, Co, Ni, Zn, Se, Cd, Sn, and Sb were less consistent overall, but produced consistent measurements in select samples. Ultrafiltering and centrifuging produced similar peak distributions, but glass fiber filtration produced more high molecular weight (MW) peaks. Storage in glass also produced more high MW peaks than did plastic bottles.

  17. Instrumental neutron activation analysis for studying size-fractionated aerosols

    NASA Astrophysics Data System (ADS)

    Salma, Imre; Zemplén-Papp, Éva

    1999-10-01

    Instrumental neutron activation analysis (INAA) was utilized for studying aerosol samples collected into a coarse and a fine size fraction on Nuclepore polycarbonate membrane filters. As a result of the panoramic INAA, 49 elements were determined in an amount of about 200-400 μg of particulate matter by two irradiations and four γ-spectrometric measurements. The analytical calculations were performed by the absolute (k0) standardization method. The calibration procedures, application protocol and the data evaluation process are described and discussed. They now make it possible to analyse a considerable number of samples while assuring the quality of the results. As a means of demonstrating the system's analytical capabilities, the concentration ranges, median or mean atmospheric concentrations and detection limits are presented for an extensive series of aerosol samples collected within the framework of an urban air pollution study in Budapest. For most elements, the precision of the analysis was found to be beyond the uncertainty represented by the sampling techniques and sample variability.

  18. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    PubMed

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification, and in particular focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. Regarding summarization of the feature activation, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are explored, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
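
    The three pooling operators compared here are one-liners over a frames-by-features activation matrix. A minimal sketch with hypothetical activations standing in for the sparse codes:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def pool(activations: np.ndarray, method: str) -> np.ndarray:
        """Aggregate a (frames x features) activation matrix into a single
        clip-level feature vector."""
        if method == "max":
            return activations.max(axis=0)
        if method == "avg":
            return activations.mean(axis=0)
        if method == "std":
            return activations.std(axis=0)
        raise ValueError(method)

    clip = rng.gamma(2.0, 1.0, size=(500, 64))   # hypothetical activations
    features = np.concatenate([pool(clip, m) for m in ("max", "avg", "std")])
    print(features.shape)   # (192,) -> input to the instrument classifier
    ```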

  19. A closer look at the size of the gaze-liking effect: a preregistered replication.

    PubMed

    Tipples, Jason; Pecchinenda, Anna

    2018-04-30

    This study is a direct replication of the gaze-liking effect using the same design, stimuli and procedure. The gaze-liking effect describes the tendency for people to rate objects as more likeable when they have recently seen a person repeatedly gaze toward rather than away from the object. However, as subsequent studies show considerable variability in the size of this effect, we sampled a larger number of participants (N = 98) than the original study (N = 24) to gain a more precise estimate of the gaze-liking effect size. Our results indicate a much smaller standardised effect size (dz = 0.02) than that of the original study (dz = 0.94). Our smaller effect size was not due to general insensitivity to eye-gaze effects, because the same sample showed a clear (dz = 1.09) gaze-cuing effect: faster reaction times when eyes looked toward vs away from target objects. We discuss the implications of our findings for future studies wishing to study the gaze-liking effect.

  20. Randomness, Sample Size, Imagination and Metacognition: Making Judgments about Differences in Data Sets

    ERIC Educational Resources Information Center

    Stack, Sue; Watson, Jane

    2013-01-01

    There is considerable research on the difficulties students have in conceptualising individual concepts of probability and statistics (see for example, Bryant & Nunes, 2012; Jones, 2005). The unit of work developed for the action research project described in this article is specifically designed to address some of these in order to help…

  1. Measurement of tortuosity in aluminum foams using airborne ultrasound.

    PubMed

    Le, Lawrence H; Zhang, Chan; Ta, Dean; Lou, Edmond

    2010-01-01

    The slow compressional wave in air-saturated aluminum foams was studied by means of ultrasonic transverse transmission method over a frequency range from 0.2 MHz to 0.8 MHz. The samples investigated have three different cell sizes or pores per inch (5, 10 and 20 ppi) and each size has three aluminum volume fractions (5%, 8% and 12% AVF). Phase velocities show minor dispersion at low frequencies but remain constant after 0.7 MHz. Pulse broadening and amplitude attenuation are obvious and increase with increasing ppi. Attenuation increases considerably with AVF for 20 ppi foams. Tortuosity ranges from 1.003 to 1.032 and increases with AVF and ppi. However, the increase of tortuosity with AVF is very small for 10 and 20 ppi samples.

  2. Embedding clinical interventions into observational studies.

    PubMed

    Newman, Anne B; Avilés-Santa, M Larissa; Anderson, Garnet; Heiss, Gerardo; Howard, Wm James; Krucoff, Mitchell; Kuller, Lewis H; Lewis, Cora E; Robinson, Jennifer G; Taylor, Herman; Treviño, Roberto P; Weintraub, William

    2016-01-01

    Novel approaches to observational studies and clinical trials could improve the cost-effectiveness and speed of translation of research. Hybrid designs that combine elements of clinical trials with observational registries or cohort studies should be considered as part of a long-term strategy to transform clinical trials and epidemiology, adapting to the opportunities of big data and the challenges of constrained budgets. Important considerations include study aims, timing, breadth and depth of the existing infrastructure that can be leveraged, participant burden, likely participation rate and available sample size in the cohort, required sample size for the trial, and investigator expertise. Community engagement and stakeholder (including study participants) support are essential for these efforts to succeed. Copyright © 2015. Published by Elsevier Inc.

  3. Catching ghosts with a coarse net: use and abuse of spatial sampling data in detecting synchronization

    PubMed Central

    2017-01-01

    Synchronization of population dynamics in different habitats is a frequently observed phenomenon. A common mathematical tool to reveal synchronization is the (cross)correlation coefficient between time courses of values of the population size of a given species where the population size is evaluated from spatial sampling data. The corresponding sampling net or grid is often coarse, i.e. it does not resolve all details of the spatial configuration, and the evaluation error—i.e. the difference between the true value of the population size and its estimated value—can be considerable. We show that this estimation error can make the value of the correlation coefficient very inaccurate or even irrelevant. We consider several population models to show that the value of the correlation coefficient calculated on a coarse sampling grid rarely exceeds 0.5, even if the true value is close to 1, so that the synchronization is effectively lost. We also observe ‘ghost synchronization’ when the correlation coefficient calculated on a coarse sampling grid is close to 1 but in reality the dynamics are not correlated. Finally, we suggest a simple test to check the sampling grid coarseness and hence to distinguish between the true and artifactual values of the correlation coefficient. PMID:28202589
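
    The attenuation mechanism is reproducible in a few lines, with additive noise standing in for the coarse-grid evaluation error:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    T = 200
    common = rng.normal(size=T)                 # shared environmental driver
    x_true = common + 0.2 * rng.normal(size=T)  # two strongly synchronised
    y_true = common + 0.2 * rng.normal(size=T)  # population signals

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    print("true synchrony:", round(corr(x_true, y_true), 2))   # ~0.96
    for noise_sd in (0.5, 1.0, 2.0):   # evaluation error grows as the grid coarsens
        x_obs = x_true + noise_sd * rng.normal(size=T)
        y_obs = y_true + noise_sd * rng.normal(size=T)
        print(noise_sd, round(corr(x_obs, y_obs), 2))  # drops toward/below 0.5
    ```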

  4. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than abundance estimates derived from sample scenarios with many sample units of small area. It is important to consider accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized, although in practice such consideration is often an afterthought that occurs during the data analysis process.

  5. The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power

    PubMed Central

    Fraley, R. Chris; Vazire, Simine

    2014-01-01

    The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely (a) to provide accurate estimates of effects, (b) to produce literatures with low false positive rates, and (c) to lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
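
    The roughly 50% figure can be reproduced from the reported median sample size, taking a typical effect of about r = .20 (that value is an assumption for illustration) and the Fisher z approximation:

    ```python
    from math import atanh, sqrt
    from scipy.stats import norm

    def power_corr(r: float, n: int, alpha: float = 0.05) -> float:
        """Approximate power of a two-sided test of H0: rho = 0."""
        z_crit = norm.ppf(1 - alpha / 2)
        return float(norm.cdf(sqrt(n - 3) * atanh(r) - z_crit))

    print(round(power_corr(0.20, 104), 2))   # ~0.53 for the median study
    ```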

  6. Drop size distributions and related properties of fog for five locations measured from aircraft

    NASA Technical Reports Server (NTRS)

    Zak, J. Allen

    1994-01-01

    Fog drop size distributions were collected from aircraft as part of the Synthetic Vision Technology Demonstration Program. Three west coast marine advection fogs, one frontal fog, and a radiation fog were sampled from the top of the cloud to the bottom as the aircraft descended on a 3-degree glideslope. Drop size versus altitude versus concentration are shown in three dimensional plots for each 10-meter altitude interval from 1-minute samples. Also shown are median volume radius and liquid water content. Advection fogs contained the largest drops with median volume radius of 5-8 micrometers, although the drop sizes in the radiation fog were also large just above the runway surface. Liquid water content increased with height, and the total number of drops generally increased with time. Multimodal variations in number density and particle size were noted in most samples where there was a peak concentration of small drops (2-5 micrometers) at low altitudes, midaltitude peak of drops 5-11 micrometers, and high-altitude peak of the larger drops (11-15 micrometers and above). These observations are compared with others and corroborate previous results in fog gross properties, although there is considerable variation with time and altitude even in the same type of fog.

  7. Design, analysis and presentation of factorial randomised controlled trials

    PubMed Central

    Montgomery, Alan A; Peters, Tim J; Little, Paul

    2003-01-01

    Background The evaluation of more than one intervention in the same randomised controlled trial can be achieved using a parallel group design. However this requires increased sample size and can be inefficient, especially if there is also interest in considering combinations of the interventions. An alternative may be a factorial trial, where for two interventions participants are allocated to receive neither intervention, one or the other, or both. Factorial trials require special considerations, however, particularly at the design and analysis stages. Discussion Using a 2 × 2 factorial trial as an example, we present a number of issues that should be considered when planning a factorial trial. The main design issue is that of sample size. Factorial trials are most often powered to detect the main effects of interventions, since adequate power to detect plausible interactions requires greatly increased sample sizes. The main analytical issues relate to the investigation of main effects and the interaction between the interventions in appropriate regression models. Presentation of results should reflect the analytical strategy, with an emphasis on the principal research questions. We also give an example of how baseline and follow-up data should be presented. Lastly, we discuss the implications of the design, analytical and presentational issues covered. Summary Difficulty in interpreting the results of factorial trials when an influential interaction is observed is the cost of the potential for efficient, simultaneous consideration of two or more interventions. Factorial trials can in principle be designed to have adequate power to detect realistic interactions, and in any case they are the only design that allows such effects to be investigated. PMID:14633287
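
    The "greatly increased sample sizes" point has a standard normal-approximation form: a main effect in a 2 × 2 factorial is tested on the margins using the whole trial, while an interaction contrast of the same magnitude has four times the variance and so needs roughly four times the total sample size. A hedged sketch (the effect size and SD are illustrative, not taken from the paper):

    ```python
    from scipy.stats import norm

    def n_per_group(delta, sd, alpha=0.05, power=0.8):
        """Per-group n for a two-sample comparison of means (normal approx.)."""
        za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
        return 2 * ((za + zb) * sd / delta) ** 2

    delta, sd = 0.5, 1.0
    n_main = 2 * n_per_group(delta, sd)   # total N: the margins use all patients
    n_interaction = 4 * n_main            # interaction of equal magnitude
    print(round(n_main), round(n_interaction))  # ~126 vs ~502
    ```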

  8. A novel synthesis of a new thorium (IV) metal organic framework nanostructure with well controllable procedure through ultrasound assisted reverse micelle method.

    PubMed

    Sargazi, Ghasem; Afzali, Daryoush; Mostafavi, Ali

    2018-03-01

    Reverse micelle (RM) and ultrasound-assisted reverse micelle (UARM) routes were applied to the synthesis of novel thorium nanostructures as metal organic frameworks (MOFs). Characterization with different techniques showed that the Th-MOF sample synthesized by the UARM method had higher thermal stability (354°C), smaller mean particle size (27 nm), and larger surface area (2.02×10³ m²/g). In this approach, the nucleation of crystals was also found to occur in a shorter time. The synthesis parameters of the UARM method were arranged in a 2^(k-1) fractional factorial design, and process control was systematically studied using analysis of variance (ANOVA) and response surface methodology (RSM). ANOVA showed that various factors, including surfactant content, ultrasound duration, temperature, ultrasound power, and interactions between these factors, considerably affected different properties of the Th-MOF samples. According to the 2^(k-1) factorial design, the determination coefficient (R²) of the model is 0.999, with no significant lack of fit. The F value of 5432 implied that the model was highly significant and adequate to represent the relationship between the responses and the independent variables; the large adjusted R² likewise indicates good agreement between the experimental data and the fitted model. RSM predicted that it would be possible to produce Th-MOF samples with a thermal stability of 407°C, mean particle size of 13 nm, and surface area of 2.20×10³ m²/g. The mechanism controlling the Th-MOF properties was considerably different from the conventional mechanisms. Moreover, the MOF sample synthesized using UARM exhibited higher capacity for nitrogen adsorption as a result of larger pore sizes. The UARM method and the systematic studies developed in the present work can be considered a new strategy for application to other nanoscale MOF samples. Copyright © 2017 Elsevier B.V. All rights reserved.
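
    For readers unfamiliar with the 2^(k-1) notation: a half-fraction plan for four two-level factors can be generated by aliasing the fourth factor with the three-way interaction of the first three. The factor names below are illustrative stand-ins for the synthesis parameters, not the authors' actual coding:

    ```python
    from itertools import product

    factors = ["surfactant", "ultrasound_time", "temperature", "ultrasound_power"]
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        d = a * b * c                 # defining relation D = ABC (resolution IV)
        runs.append(dict(zip(factors, (a, b, c, d))))

    for run in runs:                  # 8 runs instead of the full 16
        print(run)
    ```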

  9. Colloidal-facilitated transport of inorganic contaminants in ground water: part 1, sampling considerations

    USGS Publications Warehouse

    Puls, Robert W.; Eychaner, James H.; Powell, Robert M.

    1996-01-01

    Investigations at Pinal Creek, Arizona, evaluated routine sampling procedures for determination of aqueous inorganic geochemistry and assessment of contaminant transport by colloidal mobility. Sampling variables included pump type and flow rate, collection under air or nitrogen, and filter pore diameter. During well purging and sample collection, suspended particle size and number as well as dissolved oxygen, temperature, specific conductance, pH, and redox potential were monitored. Laboratory analyses of both unfiltered samples and the filtrates were performed by inductively coupled argon plasma, atomic absorption with graphite furnace, and ion chromatography. Scanning electron microscopy with energy-dispersive X-ray analysis was also used to examine filter particulates. Suspended particle counts consistently required approximately twice as long as the other field-monitored indicators to stabilize. High-flow-rate pumps entrained normally nonmobile particles. Differences in elemental concentrations obtained with different filter pore sizes were generally small, with only two wells showing differences greater than 10 percent. Similar differences (>10%) were observed for some wells when samples were collected under nitrogen rather than in air. Fe2+/Fe3+ ratios for air-collected samples were smaller than for samples collected under a nitrogen atmosphere, reflecting sampling-induced oxidation.

  10. Calibrating the Ordovician Radiation of marine life: implications for Phanerozoic diversity trends

    NASA Technical Reports Server (NTRS)

    Miller, A. I.; Foote, M.

    1996-01-01

    It has long been suspected that trends in global marine biodiversity calibrated for the Phanerozoic may be affected by sampling problems. However, this possibility has not been evaluated definitively, and raw diversity trends are generally accepted at face value in macroevolutionary investigations. Here, we analyze a global-scale sample of fossil occurrences that allows us to determine directly the effects of sample size on the calibration of what is generally thought to be among the most significant global biodiversity increases in the history of life: the Ordovician Radiation. Utilizing a composite database that includes trilobites, brachiopods, and three classes of molluscs, we conduct rarefaction analyses to demonstrate that the diversification trajectory for the Radiation was considerably different from that suggested by raw diversity time series. Our analyses suggest that a substantial portion of the increase recognized in raw diversity depictions for the last three Ordovician epochs (the Llandeilian, Caradocian, and Ashgillian) is a consequence of the increased sample size of the preserved and catalogued fossil record. We also use biometric data for a global sample of Ordovician trilobites, along with methods of measuring morphological diversity that are not biased by sample size, to show that morphological diversification in this major clade had leveled off by the Llanvirnian. The discordance between raw diversity depictions and more robust taxonomic and morphological diversity metrics suggests that sampling effects may strongly influence our perception of biodiversity trends throughout the Phanerozoic.
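
    A minimal rarefaction sketch (hedged: the paper's composite database and sample-size-robust morphological metrics are far richer than this toy). Richness is compared at a common subsampling size so that better-sampled epochs are not automatically "more diverse":

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rarefied_richness(occurrences, sample_size, n_reps=1000):
        """Expected number of distinct taxa in random subsamples of fixed size."""
        occ = np.asarray(occurrences)
        return np.mean([np.unique(rng.choice(occ, sample_size, replace=False)).size
                        for _ in range(n_reps)])

    # Hypothetical occurrence lists (taxon IDs) for two epochs:
    poorly_sampled = rng.integers(0, 40, size=200)   # few catalogued occurrences
    well_sampled = rng.integers(0, 60, size=1000)    # many catalogued occurrences

    n_common = 200   # compare both epochs at the same sampling intensity
    print(round(rarefied_richness(poorly_sampled, n_common), 1),
          round(rarefied_richness(well_sampled, n_common), 1))
    ```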

  11. Design considerations for case series models with exposure onset measurement error.

    PubMed

    Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V

    2013-02-28

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model and the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Microwave resonant and zero-field absorption study of doped magnetite prepared by a co-precipitation method.

    PubMed

    Aphesteguy, Juan Carlos; Jacobo, Silvia E; Lezama, Luis; Kurlyandskaya, Galina V; Schegoleva, Nina N

    2014-06-19

    Pure and Zn-doped magnetite (Fe3O4 and ZnxFe3-xO4) magnetic nanoparticles (NPs) were prepared in aqueous solution (Series A) or in a water-ethyl alcohol mixture (Series B) by the co-precipitation method. Only one ferromagnetic resonance line was observed in all cases under consideration, indicating that the materials are magnetically uniform. The shortfall of the resonance fields below the 3.27 kOe expected for spheres (at a frequency of 9.5 GHz) can be understood by taking into account dipolar forces, magnetoelasticity, or magnetocrystalline anisotropy. All samples show non-zero low-field absorption. For Series A samples the grain size decreases with increasing Zn content; in this case zero-field absorption does not correlate with the changes in grain size. For Series B samples the grain size and zero-field absorption behavior correlate with each other. The highest zero-field absorption corresponded to a zinc concentration of 0.2 in both the A and B series. The high zero-field absorption of Fe3O4 ferrite magnetic NPs can be of interest for biomedical applications.
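
    For orientation, the 3.27 kOe reference value can be read against the Kittel resonance condition for a uniformly magnetized sphere, in which the shape demagnetizing contributions cancel; a hedged back-of-envelope check (treating the effective g-factor as the only free parameter, which is an assumption of this sketch, not a statement from the paper):

    ```latex
    % Kittel condition for a sphere: \omega = \gamma H_{res},
    % with \gamma = g\mu_B/\hbar, i.e.
    H_{res} = \frac{2\pi f}{\gamma} = \frac{hf}{g\mu_B}
            = \frac{9.5\ \mathrm{GHz}}{(g/2)\times 2.80\ \mathrm{GHz\,kOe^{-1}}}
    % g = 2.00 gives ~3.39 kOe; an effective g of about 2.08 reproduces the
    % quoted 3.27 kOe. Resonance fields below this value point to the extra
    % dipolar, magnetoelastic, or anisotropy fields named in the abstract.
    ```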

  13. The effect of exit beam phase aberrations on parallel beam coherent x-ray reconstructions

    NASA Astrophysics Data System (ADS)

    Hruszkewycz, S. O.; Harder, R.; Xiao, X.; Fuoss, P. H.

    2010-12-01

    Diffraction artifacts from imperfect x-ray windows near the sample are an important consideration in the design of coherent x-ray diffraction measurements. In this study, we used simulated and experimental diffraction patterns in two and three dimensions to explore the effect of phase imperfections in a beryllium window (such as a void or inclusion) on the convergence behavior of phasing algorithms and on the ultimate reconstruction. A predictive relationship between beam wavelength, sample size, and window position was derived to explain the dependence of reconstruction quality on beryllium defect size. Defects corresponding to this prediction cause the most damage to the sample exit wave and induce signature error oscillations during phasing that can be used as a fingerprint of experimental x-ray window artifacts. The relationship between x-ray window imperfection size and coherent x-ray diffractive imaging reconstruction quality explored in this work can play an important role in designing high-resolution in situ coherent imaging instrumentation and will help interpret the phasing behavior of coherent diffraction measured in these in situ environments.

  14. The effect of exit beam phase aberrations on parallel beam coherent x-ray reconstructions.

    PubMed

    Hruszkewycz, S O; Harder, R; Xiao, X; Fuoss, P H

    2010-12-01

    Diffraction artifacts from imperfect x-ray windows near the sample are an important consideration in the design of coherent x-ray diffraction measurements. In this study, we used simulated and experimental diffraction patterns in two and three dimensions to explore the effect of phase imperfections in a beryllium window (such as a void or inclusion) on the convergence behavior of phasing algorithms and on the ultimate reconstruction. A predictive relationship between beam wavelength, sample size, and window position was derived to explain the dependence of reconstruction quality on beryllium defect size. Defects corresponding to this prediction cause the most damage to the sample exit wave and induce signature error oscillations during phasing that can be used as a fingerprint of experimental x-ray window artifacts. The relationship between x-ray window imperfection size and coherent x-ray diffractive imaging reconstruction quality explored in this work can play an important role in designing high-resolution in situ coherent imaging instrumentation and will help interpret the phasing behavior of coherent diffraction measured in these in situ environments.

  15. Factors Associated with the Performance and Cost-Effectiveness of Using Lymphatic Filariasis Transmission Assessment Surveys for Monitoring Soil-Transmitted Helminths: A Case Study in Kenya

    PubMed Central

    Smith, Jennifer L.; Sturrock, Hugh J. W.; Assefa, Liya; Nikolay, Birgit; Njenga, Sammy M.; Kihara, Jimmy; Mwandawiro, Charles S.; Brooker, Simon J.

    2015-01-01

    Transmission assessment surveys (TAS) for lymphatic filariasis have been proposed as a platform to assess the impact of mass drug administration (MDA) on soil-transmitted helminths (STHs). This study used computer simulation and field data from pre- and post-MDA settings across Kenya to evaluate the performance and cost-effectiveness of the TAS design for STH assessment compared with alternative survey designs. Variations in the TAS design and different sample sizes and diagnostic methods were also evaluated. The district-level TAS design correctly classified more districts compared with standard STH designs in pre-MDA settings. Aggregating districts into larger evaluation units in a TAS design decreased performance, whereas age group sampled and sample size had minimal impact. The low diagnostic sensitivity of Kato-Katz and mini-FLOTAC methods was found to increase misclassification. We recommend using a district-level TAS among children 8–10 years of age to assess STH but suggest that key consideration is given to evaluation unit size. PMID:25487730

  16. Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples

    NASA Astrophysics Data System (ADS)

    Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.

    2014-12-01

    Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration across the grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine whether traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis shows no statistically significant peaks in energy density, indicating the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies, which scale with channel width/mean velocity, and hydraulic phenomena related to channel dynamics, suggesting that suspended sediment samples taken over periods longer than 5.5 minutes incorporate variability at scales larger than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.
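
    A hedged sketch of the spectral step described above, run on synthetic data (the LISST record itself is not reproduced here): compute a Welch periodogram of a one-hour, 1 Hz concentration series and compare log-log spectral slopes on either side of 0.003 Hz:

    ```python
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(6)
    t = np.arange(3600)                      # one hour at 1 Hz
    # Synthetic series: slowly wandering mean plus fast "turbulent" noise.
    series = np.cumsum(rng.normal(0, 0.05, t.size)) + rng.normal(0, 1, t.size)

    f, pxx = welch(series, fs=1.0, nperseg=1024)
    for lo, hi in [(1e-4, 0.003), (0.003, 0.1)]:   # bands around the break
        sel = (f >= lo) & (f <= hi)
        slope = np.polyfit(np.log(f[sel]), np.log(pxx[sel]), 1)[0]
        print(f"{lo}-{hi} Hz: spectral slope ~ {slope:.2f}")
    ```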

  17. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    USGS Publications Warehouse

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
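
    For concreteness, the two estimators being validated, in a hedged sketch (the Chapman-corrected Lincoln-Petersen form and the two-pass removal form are standard; the catch numbers are invented):

    ```python
    def lincoln_petersen(marked, caught2, recaptured):
        """Chapman's bias-corrected Lincoln-Petersen abundance estimate."""
        return (marked + 1) * (caught2 + 1) / (recaptured + 1) - 1

    def two_pass_removal(c1, c2):
        """Two-pass removal estimate; assumes equal capture probability per pass."""
        if c1 <= c2:
            raise ValueError("undefined when the catch does not decline")
        return c1 ** 2 / (c1 - c2)

    print(round(lincoln_petersen(60, 55, 20)))  # ~162
    print(round(two_pass_removal(60, 25)))      # ~103
    ```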

  18. Robustness of methods for blinded sample size re-estimation with overdispersed count data.

    PubMed

    Schneider, Simon; Schmidli, Heinz; Friede, Tim

    2013-09-20

    Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually there is little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation internal pilot studies have been found very useful, and recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the EM-algorithm-based procedure with respect to its implementation, studying its dependence on the choice of convergence criterion, and find that the procedure is sensitive to the choice of stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to competing procedures with regard to operating characteristics such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.
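
    A simplified, hedged sketch of the blinded re-estimation idea: nuisance parameters are recovered from pooled (blinded) counts by the method of moments rather than the EM algorithm studied in the paper, then plugged into a standard negative-binomial sample size formula of the Zhu-Lakkis type; all numbers are illustrative, and the pooled mean is used as the control rate for simplicity:

    ```python
    import numpy as np
    from scipy.stats import norm

    def nb_n_per_arm(rate_c, rate_ratio, k, alpha=0.05, power=0.8):
        """Per-arm n to detect a rate ratio for NB counts (var = mu + mu^2/k)."""
        za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
        var_term = 1 / rate_c + 1 / (rate_c * rate_ratio) + 2 / k
        return (za + zb) ** 2 * var_term / np.log(rate_ratio) ** 2

    def blinded_moments(counts):
        """Pooled mean and shape k from blinded counts (method of moments)."""
        m, v = counts.mean(), counts.var(ddof=1)
        return m, (m ** 2 / (v - m) if v > m else np.inf)

    rng = np.random.default_rng(4)
    pilot = rng.negative_binomial(n=2, p=2 / (2 + 5), size=200)  # mean 5, k = 2
    m, k = blinded_moments(pilot)
    print(round(m, 2), round(k, 2), round(nb_n_per_arm(m, 0.8, k)))
    ```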

  19. Sources of variability and comparability between salmonid stomach contents and isotopic analyses: study design lessons and recommendations

    USGS Publications Warehouse

    Vinson, M.R.; Budy, P.

    2011-01-01

    We compared sources of variability and cost in paired stomach content and stable isotope samples from three salmonid species collected in September 2001–2005 and describe the relative information provided by each method in terms of measuring diet overlap and food web study design. Based on diet analyses, diet overlap among brown trout, rainbow trout, and mountain whitefish was high, and we observed little variation in diets among years. In contrast, for sample sizes n ≥ 25, the 95% confidence intervals (CIs) around mean δ15N and δ13C for the three target species did not overlap, and species, year, and fish size effects were significantly different, implying that these species likely consumed similar prey but in different proportions. Stable isotope processing costs were US$12 per sample, while stomach content analysis costs averaged US$25.49 ± $2.91 (95% CI) and ranged from US$1.50 for an empty stomach to US$291.50 for a sample with 2330 items. Precision in both δ15N and δ13C and in mean diet overlap values based on stomach contents increased considerably up to a sample size of n = 10 and plateaued around n = 25, with little further increase in precision.

  20. Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family.

    PubMed

    Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S

    2016-11-01

    For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample size subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to phase II clinical trials with endpoints from the exponential dispersion family of distributions. The proposed optimal design minimizes the total sample size needed to provide estimates of the population means of both arms and their difference with pre-specified precision. Its applications to data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.

  1. A Proposed Approach for Joint Modeling of the Longitudinal and Time-To-Event Data in Heterogeneous Populations: An Application to HIV/AIDS's Disease.

    PubMed

    Roustaei, Narges; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf

    2018-01-01

    In recent years, joint models have been widely used for modeling longitudinal and time-to-event data simultaneously. In this study, we proposed an approach (PA) to study longitudinal and survival outcomes simultaneously in heterogeneous populations. PA relaxes the assumption of conditional independence (CI). We also compared PA with the joint latent class model (JLCM) and a separate approach (SA) for various sample sizes (150, 300, and 600) and different association parameters (0, 0.2, and 0.5). The average bias of parameter estimation (AB-PE), average SE of parameter estimation (ASE-PE), and coverage probability of the 95% confidence interval (CP) were compared among the three approaches. In most cases, as the sample sizes increased, AB-PE and ASE-PE decreased for the three approaches, and CP got closer to the nominal level of 0.95. When there was a considerable association, PA performed better than SA and JLCM in the sense that it had the smallest AB-PE and ASE-PE for the longitudinal submodel for the small and moderate sample sizes. Moreover, JLCM was preferable in the no-association and large-sample-size settings. Finally, the evaluated approaches were applied to a real HIV/AIDS dataset for validation, and the results were compared.

  2. Mechanisms of Laser-Induced Dissection and Transport of Histologic Specimens

    PubMed Central

    Vogel, Alfred; Lorenz, Kathrin; Horneffer, Verena; Hüttmann, Gereon; von Smolinski, Dorthe; Gebert, Andreas

    2007-01-01

    Rapid contact- and contamination-free procurement of histologic material for proteomic and genomic analysis can be achieved by laser microdissection of the sample of interest followed by laser-induced transport (laser pressure catapulting). The dynamics of laser microdissection and laser pressure catapulting of histologic samples of 80 μm diameter was investigated by means of time-resolved photography. The working mechanism of microdissection was found to be plasma-mediated ablation initiated by linear absorption. Catapulting was driven by plasma formation when tightly focused pulses were used, and by photothermal ablation at the bottom of the sample when defocused pulses producing laser spot diameters larger than 35 μm were used. With focused pulses, driving pressures of several hundred MPa accelerated the specimens to initial velocities of 100–300 m/s before they were rapidly slowed by air friction. When the laser spot was increased to a size comparable to or larger than the sample diameter, both driving pressure and flight velocity decreased considerably. Based on a characterization of the thermal and optical properties of the histologic specimens and supporting materials used, we calculated the evolution of the heat distribution in the sample. Selected catapulted samples were examined by scanning electron microscopy or analyzed by real-time reverse-transcriptase polymerase chain reaction. We found that catapulting of dissected samples results in little collateral damage when the laser pulses are either tightly focused or when the laser spot size is comparable to the specimen size. By contrast, moderate defocusing with spot sizes up to one-third of the specimen diameter may involve significant heat and ultraviolet exposure. Potential side effects are maximal when samples are catapulted directly from a glass slide without a supporting polymer foil. PMID:17766336

  3. Morphological diversity of Trichuris spp. eggs observed during an anthelminthic drug trial in Yunnan, China, and relative performance of parasitologic diagnostic tools.

    PubMed

    Steinmann, Peter; Rinaldi, Laura; Cringoli, Giuseppe; Du, Zun-Wei; Marti, Hanspeter; Jiang, Jin-Yong; Zhou, Hui; Zhou, Xiao-Nong; Utzinger, Jürg

    2015-01-01

    The presence of large Trichuris spp. eggs in human faecal samples is occasionally reported. Such eggs have been described as variant Trichuris trichiura or Trichuris vulpis eggs. Within the framework of a randomised controlled trial, faecal samples collected from 115 Bulang individuals from Yunnan, People's Republic of China were subjected to the Kato-Katz technique (fresh stool samples) and the FLOTAC and ether-concentration techniques (sodium acetate-acetic acid-formalin (SAF)-fixed stool samples). Large Trichuris spp. eggs were noted in faecal samples with a prevalence of 6.1% before and 21.7% after anthelminthic drug administration. The observed prevalence of standard-sized T. trichiura eggs was reduced from 93.0% to 87.0% after treatment. Considerably more cases of large Trichuris spp. eggs, and slightly more cases with normal-sized T. trichiura eggs, were identified by FLOTAC than by the ether-concentration technique. No large Trichuris spp. eggs were observed on the Kato-Katz thick smears. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Random Distribution Pattern and Non-adaptivity of Genome Size in a Highly Variable Population of Festuca pallens

    PubMed Central

    Šmarda, Petr; Bureš, Petr; Horová, Lucie

    2007-01-01

    Background and Aims The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is taken as a model for studying the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2·25 m2), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1·115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fits the extreme (Gumbel) distribution with rare occurrences of extremely large genomes (positive-skewed), similar to the log-normal distribution observed across angiosperms as a whole. Even plants from the same plot frequently varied considerably in genome size, and the spatial distribution of genome sizes was generally random and spatially unautocorrelated (P > 0·05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968

  5. Role of sediment size and biostratinomy on the development of biofilms in recent avian vertebrate remains

    NASA Astrophysics Data System (ADS)

    Peterson, Joseph E.; Lenczewski, Melissa E.; Clawson, Steven R.; Warnock, Jonathan P.

    2017-04-01

    Microscopic soft tissues have been identified in fossil vertebrate remains collected from various lithologies. However, the diagenetic mechanisms that preserve such tissues have remained elusive. While previous studies have described infiltration of biofilms in Haversian and Volkmann’s canals, biostratinomic alteration (e.g., trampling), and iron derived from hemoglobin as playing roles in the preservation processes, the influence of sediment texture has not previously been investigated. This study uses a Kolmogorov-Smirnov goodness-of-fit test to explore the influence of biostratinomic variability and burial media on the infiltration of biofilms in bone samples. Controlled columns of sediment with bone samples were used to simulate burial and subsequent groundwater flow. Sediments used in this study include clay-, silt-, and sand-sized particles modeled after various fluvial facies commonly associated with fossil vertebrates. Extant limb bone samples obtained from Gallus gallus domesticus (Domestic Chicken) buried in clay-rich sediment exhibit heavy biofilm infiltration, while bones buried in sands and silts exhibit moderate levels of infiltration. Crushed bones exhibit significantly lower biofilm infiltration than whole bone samples. Strong interactions between biostratinomic alteration and sediment size are also identified with respect to biofilm development. Sediments modeling crevasse splay deposits exhibit considerable variability; whole-bone crevasse splay samples exhibit higher frequencies of high-level biofilm infiltration, and crushed-bone samples in modeled crevasse splay deposits display relatively high frequencies of low-level biofilm infiltration. These results suggest that sediment size, depositional setting, and biostratinomic condition play key roles in biofilm infiltration in vertebrate remains, and may influence soft tissue preservation in fossil vertebrates.

  6. Maximizing ecological and evolutionary insight in bisulfite sequencing data sets

    PubMed Central

    Lea, Amanda J.; Vilgalys, Tauras P.; Durst, Paul A.P.; Tung, Jenny

    2017-01-01

    Preface Genome-scale bisulfite sequencing approaches have opened the door to ecological and evolutionary studies of DNA methylation in many organisms. These approaches can be powerful. However, they introduce new methodological and statistical considerations, some of which are particularly relevant to non-model systems. Here, we highlight how these considerations influence a study’s power to link methylation variation with a predictor variable of interest. Relative to current practice, we argue that sample sizes will need to increase to provide robust insights. We also provide recommendations for overcoming common challenges and an R Shiny app to aid in study design. PMID:29046582

  7. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet there is considerable uncertainty in the power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a given size reduction ratio is estimated accurately. In this research, three well-known mechanistic equations attributed to Rittinger, Kick, and Bond for predicting the energy input for grinding pine wood chips were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to 11.7% (wb) moisture content. The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input versus mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. Geometric mean particle size was calculated using two methods: (1) Tyler sieving with particle size analysis, and (2) the Sauter mean diameter, calculated from the ratio of volume to surface estimated from measured length and width. The two mean diameters agreed well, indicating that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, specific energy input to the hammer mill increased from 1.4 kWh t⁻¹ (5.2 J g⁻¹) for the large 25.1-mm screen to 25 kWh t⁻¹ (90.4 J g⁻¹) for the small 3.2-mm screen.

  8. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE PAGES

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao; ...

    2016-01-05

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet there is considerable uncertainty in the power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a given size reduction ratio is estimated accurately. In this research, three well-known mechanistic equations attributed to Rittinger, Kick, and Bond for predicting the energy input for grinding pine wood chips were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to 11.7% (wb) moisture content. The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input versus mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. Geometric mean particle size was calculated using two methods: (1) Tyler sieving with particle size analysis, and (2) the Sauter mean diameter, calculated from the ratio of volume to surface estimated from measured length and width. The two mean diameters agreed well, indicating that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, specific energy input to the hammer mill increased from 1.4 kWh t⁻¹ (5.2 J g⁻¹) for the large 25.1-mm screen to 25 kWh t⁻¹ (90.4 J g⁻¹) for the small 3.2-mm screen.
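
    The three comminution laws named above can be compared with a through-origin least-squares fit of each law's size predictor to measured specific energy; the feed/product sizes and energies below are illustrative stand-ins, not the paper's measurements:

    ```python
    import numpy as np

    x1 = np.array([25.4, 25.4, 25.4, 25.4])   # feed size, mm (hypothetical)
    x2 = np.array([5.0, 2.2, 1.4, 0.7])       # product size, mm (hypothetical)
    E = np.array([2.3, 5.6, 9.5, 19.8])       # specific energy, kWh/t

    laws = {
        "Rittinger": 1 / x2 - 1 / x1,                 # E = C (1/x2 - 1/x1)
        "Kick": np.log(x1 / x2),                      # E = C ln(x1/x2)
        "Bond": 1 / np.sqrt(x2) - 1 / np.sqrt(x1),    # E = C (x2^-.5 - x1^-.5)
    }
    for name, pred in laws.items():
        C, *_ = np.linalg.lstsq(pred[:, None], E, rcond=None)
        rss = ((E - pred * C[0]) ** 2).sum()
        print(f"{name}: C = {C[0]:.2f}, residual SS = {rss:.2f}")
    ```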

  9. Design considerations for examining trends in avian abundance using point counts: examples from oak woodlands

    Treesearch

    Kathryn L. Purcell; Sylvia R. Mori; Mary K. Chase

    2005-01-01

    We used data from two oak-woodland sites in California to develop guidelines for the design of bird monitoring programs using point counts. We used power analysis to determine sample size adequacy when varying the number of visits, count stations, and years for examining trends in abundance. We assumed an overdispersed Poisson distribution for count data, with...

  10. Size-segregated sugar composition of transported dust aerosols from Middle-East over Delhi during March 2012

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Aggarwal, S. G.; Fu, P. Q.; Kang, M.; Sarangi, B.; Sinha, D.; Kotnala, R. K.

    2017-06-01

    During March 20-22, 2012, Delhi experienced a massive dust storm that originated in the Middle East. Size-segregated sampling of these dust aerosols was performed using a nine-stage Andersen sampler; 5 sets of samples were collected: before the dust storm (BDS), on dust-storm days 1 to 3 (DS1 to DS3), and after the dust storm (ADS). Sugars (mono- and disaccharides, sugar alcohols, and anhydro-sugars) were determined using the GC-MS technique. It was observed that at the onset of the dust storm, the total suspended particulate matter (TSPM, sum of all stages) concentration in the DS1 sample increased by > 2.5-fold compared to that of the BDS samples. Interestingly, fine particulate matter (sum of stages with cutoff size < 2.1 μm) loading in DS1 also increased by > 2.5-fold compared to that of the BDS samples. Sugars analyzed in DS1 coarse-mode (sum of stages with cutoff size > 2.1 μm) samples showed a considerable increase (~1.7-2.8-fold) compared to the other samples. It was further observed that monosaccharide, disaccharide, and sugar-alcohol concentrations were enhanced in giant (> 9.0 μm) particles in DS1 samples compared to the other samples. On the other hand, anhydro-sugars comprised 13-27% of sugars in coarse-mode particles and were mostly found in the fine mode, constituting 66-85% of sugars in all sample types. Trehalose showed an enhanced (~2-4-fold) concentration in DS1 aerosol samples in both the coarse (62.80 ng/m3) and fine (8.57 ng/m3) modes. This increase in trehalose content in both coarse and fine modes suggests an origin in the transported desert dust and supports its candidacy as an organic tracer for desert dust entrainment. Further, levoglucosan-to-mannosan (L/M) ratios, which have been used to infer the type of biomass burning influencing aerosols, are found to be size dependent in these samples; the ratios are higher for fine-mode particles and should therefore be used with caution when interpreting sources with this tool.

  11. The size, mass, and composition of plastic debris in the western North Atlantic Ocean.

    PubMed

    Morét-Ferguson, Skye; Law, Kara Lavender; Proskurowski, Giora; Murphy, Ellen K; Peacock, Emily E; Reddy, Christopher M

    2010-10-01

    This study reports the first inventory of physical properties of individual plastic debris in the North Atlantic. We analyzed 748 samples for size, mass, and material composition collected from surface net tows on 11 expeditions from Cape Cod, Massachusetts to the Caribbean Sea between 1991 and 2007. Particles were mostly fragments less than 10 mm in size, with nearly all lighter than 0.05 g. Material densities ranged from 0.808 to 1.24 g ml⁻¹, with about half between 0.97 and 1.04 g ml⁻¹, a range not typically found in virgin plastics. Elemental analysis suggests that samples in this density range are consistent with polypropylene and polyethylene whose densities have increased, likely due to biofouling. Pelagic densities varied considerably from those of beach plastic debris, suggesting that plastic particles are modified during their residence at sea. These analyses provide clues for understanding particle fate and potential debris sources, and address ecological implications of pelagic plastic debris. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Selecting the optimum plot size for a California design-based stream and wetland mapping program.

    PubMed

    Lackey, Leila G; Stein, Eric D

    2014-04-01

    Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed that the smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to the selection of 4 km² for the California status and trends program.
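
    A toy version of this kind of simulated sampling (the raster, plot counts, and densities are invented for illustration, and real wetlands are clustered rather than uniformly scattered): estimate total wetland extent from randomly placed square plots of several sizes and compare the spread of the estimates:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    region = rng.random((120, 120)) < 0.03     # synthetic wetland cells
    true_total = region.sum()

    def estimate_total(side, n_plots=50, n_reps=200):
        """Mean wetland density in sampled plots, scaled to the whole region."""
        ests = []
        for _ in range(n_reps):
            corners = rng.integers(0, 120 - side, size=(n_plots, 2))
            dens = np.mean([region[i:i + side, j:j + side].mean()
                            for i, j in corners])
            ests.append(dens * region.size)
        return np.mean(ests), np.std(ests)

    for side in (10, 20, 30, 40):              # analogous to 1/4/9/16 km^2 plots
        mean, sd = estimate_total(side)
        print(side, round(mean), round(sd, 1), "| true:", true_total)
    ```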

  13. Optimization of the two-sample rank Neyman-Pearson detector

    NASA Astrophysics Data System (ADS)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal rank-based algorithms for finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ..., xn in the experimental communications channel. The rank of an observation is computed on the basis of relations between x and the variable y, representing interference. Attention is given to conditions in the absence of a signal, the probability of detection of an arriving signal, details regarding the use of the Neyman-Pearson criterion, the scheme of an optimal rank, multichannel, incoherent detector, and an analysis of the detector.

  14. Characterization of fly ash from low-sulfur and high-sulfur coal sources: Partitioning of carbon and trace elements with particle size

    USGS Publications Warehouse

    Hower, J.C.; Trimble, A.S.; Eble, C.F.; Palmer, C.A.; Kolker, A.

    1999-01-01

    Fly ash samples were collected in November and December of 1994 from generating units at a Kentucky power station using high- and low-sulfur feed coals. The samples are part of a two-year study of the coal and coal combustion byproducts from the power station. The ashes were wet screened at 100, 200, 325, and 500 mesh (150, 75, 42, and 25 μm, respectively). The size fractions were then dried, weighed, split for petrographic and chemical analysis, and analyzed for ash yield and carbon content. The low-sulfur "heavy side" and "light side" ashes each have a similar size distribution in the November samples. In contrast, the December fly ashes showed the trend observed in later months, the light-side ash being finer (over 20% more ash in the -500 mesh [-25 μm] fraction) than the heavy-side ash. Carbon tended to be concentrated in the coarse fractions in the December samples. The dominance of the -325 mesh (-42 μm) fractions in the overall size analysis implies, though, that carbon in the fine sizes may be an important consideration in the utilization of the fly ash. Element partitioning follows several patterns. Volatile elements, such as Zn and As, are enriched in the finer sizes, particularly in fly ashes collected at cooler, light-side electrostatic precipitator (ESP) temperatures. The latter trend is a function of precipitation at the cooler ESP temperatures and of increasing concentration with the increased surface area of the finest fraction. Mercury concentrations are higher in high-carbon fly ashes, suggesting Hg adsorption on the fly ash carbon. Ni and Cr are associated, in part, with the spinel minerals in the fly ash. Copyright © 1999 Taylor & Francis.

  15. Frequency of Bolton tooth-size discrepancies among orthodontic patients.

    PubMed

    Freeman, J E; Maskeroni, A J; Lorton, L

    1996-07-01

    The purpose of this study was to determine the percentage of orthodontic patients who present with an interarch tooth-size discrepancy likely to affect treatment planning or results. The Bolton tooth-size discrepancies of 157 patients accepted for treatment in an orthodontic residency program were evaluated for the frequency and the magnitude of deviation from Bolton's mean. Discrepancies outside of 2 SD were considered as potentially significant with regard to treatment planning and treatment results. Although the mean of the sample was nearly identical to that of Bolton's, the range and standard deviation varied considerably with a large percentage of the orthodontic patients having discrepancies outside of Bolton's 2 SD. With such a high frequency of significant discrepancies it would seem prudent to routinely perform a tooth-size analysis and incorporate the findings into orthodontic treatment planning.

  16. Operationalizing hippocampal volume as an enrichment biomarker for amnestic MCI trials: effect of algorithm, test-retest variability and cut-point on trial cost, duration and sample size

    PubMed Central

    Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.

    2014-01-01

    Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008

  17. Taking Costs and Diagnostic Test Accuracy into Account When Designing Prevalence Studies: An Application to Childhood Tuberculosis Prevalence.

    PubMed

    Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence

    2017-11-01

    When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.

  18. Effect of mechanical alloying synthesis process on the dielectric properties of (Bi0.5Na0.5)0.94Ba0.06TiO3 piezoceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghazanfari, Mohammad Reza, E-mail: Ghazanfari.mr@gmail.com; Amini, Rasool; Shams, Seyyedeh Fatemeh

    Highlights: • MA samples show higher dielectric permittivity and Curie temperature. • In MA samples, the dielectric loss is almost 27% less than in conventional ones. • In MA samples, sintering time and temperature are lower than for conventional ones. • In MA samples, particle morphology is more homogeneous than in conventional ones. • In MA samples, the crystallite size is smaller than in conventional ones. - Abstract: In the present work, in order to study the effects of synthesis technique on dielectric properties, BNBT lead-free piezoceramics with (Bi0.5Na0.5)0.94Ba0.06TiO3 stoichiometry (referred to as BNBT6) were synthesized by mechanical alloying (MA) and conventional mixed-oxide methods. The structural, microstructural, and dielectric characterizations were carried out by X-ray diffraction (XRD), scanning electron microscopy (SEM), and an impedance-analyzer LCR meter, respectively. Based on the results, the density of the MA samples is considerably higher than that of the conventional samples, owing to the smaller particle size and more uniform particle shape of the MA samples. Moreover, the dielectric properties of the MA samples are comparatively improved, with a dielectric loss almost 27% less than that of the conventional ones. Furthermore, the MA samples exhibit clearly higher dielectric permittivity and Curie temperature compared to the conventional samples.

  19. Examining the reliability and validity of an abbreviated Psychopathic Personality Inventory-Revised (PPI-R) in four samples.

    PubMed

    Ruchensky, Jared R; Edens, John F; Donnellan, M Brent; Witt, Edward A

    2017-02-01

    A recently developed 40-item short-form of the Psychopathic Personality Inventory-Revised (PPI-R; Lilienfeld & Widows, 2005) has shown considerable promise as an alternative to the long-form of the instrument (Eisenbarth, Lilienfeld, & Yarkoni, 2015). Beyond the initial construction of the short-form, however, Eisenbarth et al. only evaluated a small number of external correlates in a German college student sample. In this study, we evaluate the internal consistency of the short-form scales in 4 samples previously administered the full PPI-R (3 U.S. college student samples and 1 U.S. forensic psychiatric inpatient sample) and examine a wide range of external correlates to compare the nomological nets of the short- and long-forms. Across all 4 samples, correlations between each short-form scale and its corresponding long-form scale were uniformly high (all rs > .75). In terms of external correlates, the pattern of associations was exceedingly similar for the short-form and long-form composites, with a largely trivial reduction in effect size. Collectively, our findings offer considerable support for the utility of this new short-form as a substitute for the full PPI-R. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Evolution of porous structure and texture in nanoporous SiO2/Al2O3 materials during calcination

    NASA Astrophysics Data System (ADS)

    Glazkova, Elena A.; Bakina, Olga V.

    2016-11-01

    The study focuses on the evolution of the porous structure and texture of silica/alumina xerogels during calcination in the temperature range from 500 to 1200°C. The xerogel was prepared via a sol-gel method using subcritical drying. The silica/alumina xerogels were examined using transmission electron microscopy with energy-dispersive spectroscopy (TEM-EDS), Brunauer-Emmett-Teller and Barrett-Joyner-Halenda analyses (BET-BJH), differential scanning calorimetry (DSC), and Fourier transform infrared (FTIR) spectroscopy. SiO2 primary particles about 10 nm in size are interconnected to form the porous xerogel structure, with alumina uniformly distributed over the xerogel volume. Heat treatment changes the textural characteristics radically: the specific surface area and pore size attain their maxima at 500-700°C. Heat treatment also causes dehydroxylation of the xerogel surface, and at 1200°C the sample sinters, loses mesoporosity, and its specific surface area decreases considerably, down to 78 m²/g.

  1. Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables

    PubMed Central

    Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D

    2006-01-01

    Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables, and based on its maximum likelihood estimation, we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities for the within-subject coefficient of variation. The maximum likelihood estimation and the sample size estimation based on a pre-specified confidence interval width are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
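
    A hedged sketch of the basic estimator underlying the continuous-variable case: the within-subject coefficient of variation from a one-way random-effects layout, computed as the square root of the within-subject mean square divided by the grand mean (simulated data with illustrative parameters; the paper works with the maximum likelihood estimator and a variance-stabilizing transformation on top of this quantity):

    ```python
    import numpy as np

    def within_subject_cv(data):
        """data: rows = subjects, columns = repeated measurements."""
        data = np.asarray(data, float)
        subj_means = data.mean(axis=1, keepdims=True)
        ms_within = ((data - subj_means) ** 2).sum() / (data.size - data.shape[0])
        return np.sqrt(ms_within) / data.mean()

    rng = np.random.default_rng(2)
    true_subject_level = rng.normal(100, 10, size=50)              # between-subject
    obs = true_subject_level[:, None] + rng.normal(0, 5, (50, 3))  # within-subject
    print(round(within_subject_cv(obs), 3))   # close to the true 5/100 = 0.05
    ```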

  2. Variation in aluminum, iron, and particle concentrations in oxic ground-water samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    USGS Publications Warehouse

    Szabo, Z.; Oden, J.H.; Gibs, J.; Rice, D.E.; Ding, Y.; ,

    2001-01-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-µm (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-µm pore-size capsule filter, (2) a 0.45-µm pore-size capsule filter and a 0.0029-µm pore-size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-µm and a 0.05-µm pore-size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. Concentrations of particles were determined by light scattering. Variations in concentrations of aluminum and iron (1-74 and 1-199 µg/L (micrograms per liter), respectively), common indicators of the presence of particulate-borne trace elements, were greatest in sample sets from individual wells with the greatest variations in turbidity and particle concentration. Differences in trace-element concentrations in sequentially collected unfiltered samples with variable turbidity were 5 to 10 times as great as those in concurrently collected samples that were passed through various filters. These results indicate that turbidity must be both reduced and stabilized, even when low-flow sample-collection techniques are used, in order to obtain water samples that do not contain considerable particulate artifacts. Currently (2001) available techniques need to be refined to ensure that measured trace-element concentrations are representative of those that are mobile in the aquifer water.

  3. Recreation Carrying Capacity Facts and Considerations. Report 11. Surry Mountain Lake Project Area.

    DTIC Science & Technology

    1980-07-01

    contributions of practical experience and knowledge, along with their assistance in arranging schedules, have made this carrying capacity research effort...This survey obtained six responses from boaters and water-skiers...User characteristics Table 17 indicates the characteristics of the boaters and water-skiers surveyed at Surry. The small sample size at

  4. Where statistics and molecular microarray experiments biology meet.

    PubMed

    Kelmansky, Diana M

    2013-01-01

    This review chapter presents a statistical point of view on microarray experiments, with the purpose of understanding the apparent contradictions that often appear in relation to their results. We give a brief introduction to molecular biology for nonspecialists. We describe microarray experiments from their construction and the biological principles the experiments rely on, to data acquisition and analysis. The roles of epidemiological approaches and sample size considerations are also discussed.

  5. Exploring how to increase response rates to surveys of older people.

    PubMed

    Palonen, Mira; Kaunonen, Marja; Åstedt-Kurki, Päivi

    2016-05-01

    To address the special considerations that need to be taken into account when collecting data from older people in healthcare research. An objective of all research studies is to ensure an adequate sample size. The final sample size will be influenced by the methods of recruitment and data collection, among other factors. There are some special considerations that need to be addressed when collecting data among older people. Quantitative surveys of people aged 60 or over, published in 2009-2014, were analysed using statistical methods. A quantitative study of patients aged 75 or over in an emergency department was used as an example. A methodological approach to analysing quantitative studies concerned with older people was adopted. The best way to ensure high response rates in surveys involving people aged 60 or over is to collect data in the presence of the researcher; response rates are lowest in posted surveys and settings where the researcher is not present when data are collected. Response rates do not seem to vary according to the database from which information about the study participants is obtained or according to who is responsible for recruitment to the survey. Implications for research/practice: To conduct coherent studies with older people, the data collection process should be carefully considered.

  6. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
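
    The core phenomenon is easy to reproduce. The sketch below (illustrative settings only, not the authors' simulation grid) pools a growing sequence of null trials (true RRR = 0%) with a fixed-effect inverse-variance model and tracks how often the cumulative estimate overstates the effect by more than 30%:

        import numpy as np

        rng = np.random.default_rng(7)

        def pooled_rrr(events_t, n_t, events_c, n_c):
            """Fixed-effect inverse-variance pooled relative risk reduction (1 - RR)."""
            et, ec = events_t + 0.5, events_c + 0.5   # continuity correction
            nt, nc = n_t + 1.0, n_c + 1.0
            log_rr = np.log((et / nt) / (ec / nc))
            var = 1/et - 1/nt + 1/ec - 1/nc           # var of per-trial log RR
            w = 1 / var
            return 1 - np.exp((w * log_rr).sum() / w.sum())

        n_sims, max_trials, n_arm, p = 1000, 40, 50, 0.10   # true RRR = 0
        over30 = np.zeros(max_trials)
        for s in range(n_sims):
            et = rng.binomial(n_arm, p, max_trials)   # treatment-arm events per trial
            ec = rng.binomial(n_arm, p, max_trials)   # control-arm events per trial
            for k in range(1, max_trials + 1):
                rrr = pooled_rrr(et[:k], np.full(k, n_arm), ec[:k], np.full(k, n_arm))
                over30[k - 1] += rrr > 0.30
        over30 /= n_sims
        for k in (1, 5, 10, 20, 40):
            print(f"{2*k*n_arm:5d} patients: P(RRR > 30%) = {over30[k-1]:.3f}")

    With few patients the overestimation probability is substantial, and it decays steadily as patients and events accumulate, which is the behaviour the paper quantifies against the optimal information size.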

  7. What is a species? A new universal method to measure differentiation and assess the taxonomic rank of allopatric populations, using continuous variables

    PubMed Central

    Donegan, Thomas M.

    2018-01-01

    Abstract Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off, which excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases to sample size. The "species scoring" model of Tobias et al. (2010) involves less drastic cut-offs, but unlike Isler et al. (1998), does not control adequately for sample size and attributes scores in many cases to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations, and either controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables, and a formula is proposed for ranking allopatric populations. This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. Distance between any two populations is calculated using Euclidean summation of the non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as a species; if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and is being made available which allows this and the other tests of differentiation and rank studied in this paper to be rapidly performed. PMID:29780266
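
    As a rough sketch of the proposed scoring pipeline (significance gate, unpooled standardizer, Euclidean summation), the code below uses a Welch t-test as the gateway and a placeholder t-based shrinkage for the sample-size control; the paper's exact correction and thresholds may differ, and the data are invented:

        import numpy as np
        from scipy import stats

        def gated_effect_size(x, y, alpha=0.05):
            """Effect size with unpooled SDs, scored zero when the Welch
            two-sample t-test is not significant (sketch of the proposal)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            t, p = stats.ttest_ind(x, y, equal_var=False)
            if p >= alpha:
                return 0.0
            d = (x.mean() - y.mean()) / np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
            # small-sample shrinkage via the t-distribution: scaling by the ratio of
            # the normal to the t critical value is an assumption of this sketch
            df = min(len(x), len(y)) - 1
            return d * stats.norm.ppf(0.975) / stats.t.ppf(0.975, df)

        def population_distance(pairs):
            """Euclidean summation of gated effect sizes over several variables."""
            return np.sqrt(sum(gated_effect_size(x, y) ** 2 for x, y in pairs))

        rng = np.random.default_rng(0)
        pairs = [(rng.normal(10, 2, 12), rng.normal(12, 2, 9)),   # e.g., song pitch
                 (rng.normal(50, 5, 12), rng.normal(51, 5, 9))]   # e.g., wing length
        print(population_distance(pairs))

    Under the paper's rule, this distance for an allopatric pair would then be compared against the same score computed for a related sympatric pair.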

  8. Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust

    PubMed Central

    Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin

    2015-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881

  9. Consideration of kaolinite interference correction for quartz measurements in coal mine dust.

    PubMed

    Lee, Taekhee; Chisholm, William P; Kashon, Michael; Key-Schwartz, Rosa J; Harper, Martin

    2013-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed "deviation," not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected.

  10. Processing-property relations in YBa2Cu3O(6+x) superconductors

    NASA Astrophysics Data System (ADS)

    Safari, A.; Wachtman, J. B., Jr.; Parkhe, V.; Caracciolo, R.; Jeter, D.

    The processing of YBa2Cu3O(6+x) superconducting samples by different precursor powder preparation techniques, such as ball milling, attrition milling, and the preparation of powders with narrow particle size distributions through coprecipitation by spraying, is discussed. CuO coated with oxalates shows the lowest resistance above Tc up to room temperature. The extent of corrosion by water has been studied by employing magnetic susceptibility, XPS, and X-ray diffraction. Superconducting samples are affected to a considerable extent when treated in water at 60 C, and the severity of the attack increases with time.

  11. Sequencing chess

    NASA Astrophysics Data System (ADS)

    Atashpendar, Arshia; Schilling, Tanja; Voigtmann, Thomas

    2016-10-01

    We analyze the structure of the state space of chess by means of transition path sampling Monte Carlo simulations. Based on the typical number of moves required to transpose a given configuration of chess pieces into another, we conclude that the state space consists of several pockets between which transitions are rare. Skilled players explore an even smaller subset of positions that populate some of these pockets only very sparsely. These results suggest that the usual measures to estimate both the size of the state space and the size of the tree of legal moves are not unique indicators of the complexity of the game, but that considerations regarding the connectedness of states are equally important.

  12. Sampling procedures for throughfall monitoring: A simulation study

    NASA Astrophysics Data System (ADS)

    Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut

    2010-01-01

    What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
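
    A back-of-the-envelope companion to this result (simple random sampling theory, not the authors' geostatistical simulation) shows why small events, which have larger coefficients of variation among collectors, demand many more funnels for a given relative error limit; the CV and error values are illustrative:

        import math

        def collectors_needed(cv, rel_error, z=1.96):
            """Approximate number of independent collectors so that the 95% CI
            half-width is within rel_error of the mean (simple random sampling)."""
            return math.ceil((z * cv / rel_error) ** 2)

        # CV of throughfall volumes is typically far larger for small events
        for cv in (0.3, 0.6, 1.0):
            print(f"CV={cv}: n={collectors_needed(cv, 0.20)} (20% error), "
                  f"n={collectors_needed(cv, 0.05)} (5% error)")

    The quadratic growth in n as the error limit tightens is why a 5% relative error is described as out of reach under heterogeneous canopies, and why larger supports (troughs) that reduce the between-collector CV pay off so strongly.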

  13. Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies

    PubMed Central

    Rast, Philippe; Hofer, Scott M.

    2014-01-01

    We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among the number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to attain power ≥ .80 was non-linear, with the required sample size decreasing rapidly as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, of error variance with GRR, and parameter values that lie largely outside the bounds of actual study values. Power to detect change is generally low in the early phases (i.e. first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544
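
    Growth rate reliability has a commonly used closed form for an OLS-estimated slope: GRR = sigma2_slope / (sigma2_slope + sigma2_error / SSX), where SSX is the sum of squared deviations of the measurement occasions around their mean. The sketch below (illustrative variance values, equally spaced occasions assumed) shows how adding occasions and, especially, lengthening the study raises GRR:

        import numpy as np

        def grr(n_occ, duration, slope_var, err_var):
            """Growth rate reliability for n_occ equally spaced occasions
            over `duration` years: var(slope) / (var(slope) + err_var/SSX)."""
            t = np.linspace(0, duration, n_occ)
            ssx = ((t - t.mean()) ** 2).sum()
            return slope_var / (slope_var + err_var / ssx)

        for n_occ, dur in [(4, 3), (4, 6), (8, 6)]:
            print(n_occ, "waves over", dur, "years:",
                  round(grr(n_occ, dur, slope_var=0.05, err_var=1.0), 2))

    Because SSX grows with the square of the study duration, stretching the design in time raises GRR much faster than adding occasions within a fixed window, which is consistent with the low power reported for the early years of a study.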

  14. Crystal Face Distributions and Surface Site Densities of Two Synthetic Goethites: Implications for Adsorption Capacities as a Function of Particle Size.

    PubMed

    Livi, Kenneth J T; Villalobos, Mario; Leary, Rowan; Varela, Maria; Barnard, Jon; Villacís-García, Milton; Zanella, Rodolfo; Goodridge, Anna; Midgley, Paul

    2017-09-12

    Two synthetic goethites of varying crystal size distributions were analyzed by BET, conventional TEM, cryo-TEM, atomic resolution STEM and HRTEM, and electron tomography in order to determine the effects of crystal size, shape, and atomic scale surface roughness on their adsorption capacities. The two samples were determined by BET to have very different site densities based on Cr VI adsorption experiments. Model specific surface areas generated from TEM observations showed that, based on size and shape alone, there should be little difference in their adsorption capacities. Electron tomography revealed that both samples crystallized with an asymmetric {101} tablet habit. STEM and HRTEM images showed a significant increase in atomic-scale surface roughness of the larger goethite. This difference in roughness was quantified based on measurements of the relative abundances of crystal faces {101} and {210} for the two goethites, and a reactive surface site density was calculated for each goethite. Singly coordinated sites on face {210} are 2.5 times as dense as on face {101}, and the larger goethite showed an average total of 36% {210} as compared to 14% for the smaller goethite. This difference explains the considerably larger adsorption capacity of the larger goethite vs the smaller sample and points toward the necessity of knowing the atomic scale surface structure in predicting mineral adsorption processes.

  15. Planning Considerations for a Mars Sample Receiving Facility: Summary and Interpretation of Three Design Studies

    NASA Astrophysics Data System (ADS)

    Beaty, David W.; Allen, Carlton C.; Bass, Deborah S.; Buxbaum, Karen L.; Campbell, James K.; Lindstrom, David J.; Miller, Sylvia L.; Papanastassiou, Dimitri A.

    2009-10-01

    It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.

  16. Planning considerations for a Mars Sample Receiving Facility: summary and interpretation of three design studies.

    PubMed

    Beaty, David W; Allen, Carlton C; Bass, Deborah S; Buxbaum, Karen L; Campbell, James K; Lindstrom, David J; Miller, Sylvia L; Papanastassiou, Dimitri A

    2009-10-01

    It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.

  17. Karyological features of wild and cultivated forms of myrtle (Myrtus communis, Myrtaceae).

    PubMed

    Serçe, S; Ekbiç, E; Suda, J; Gündüz, K; Kiyga, Y

    2010-03-09

    Myrtle is an evergreen shrub or small tree widespread throughout the Mediterranean region. In Turkey, both cultivated and wild forms, differing in plant and fruit size and fruit composition, can be found. These differences may have resulted from the domestication of the cultivated form over a long period of time. We investigated whether wild and cultivated forms of myrtle differ in karyological features (i.e., number of somatic chromosomes and relative genome size). We sampled two wild forms and six cultivated types of myrtle. All the samples had the same chromosome number (2n = 2x = 22). The results were confirmed by 4',6-diamidino-2-phenylindole (DAPI) flow cytometry. Only negligible variation (approximately 3%) in relative fluorescence intensity was observed among the different myrtle accessions, with wild genotypes having the smallest values. We concluded that despite considerable morphological differentiation, cultivated and wild myrtle genotypes in Turkey have similar karyological features.

  18. A Quarter Century of Variation in Color and Allometric Characteristics of Eggs from a Rain Forest Population of the Pearly-eyed Thrasher (Margarops fuscatus).

    Treesearch

    WAYNE J. ARENDT

    2004-01-01

    Egg color, size, and shape vary considerably within and among female Pearly-eyed Thrashers (Margarops fuscatus). Results of a 25-yr study (1979-2004) are presented to provide comparative data. In a sample of 4,128 eggs, typical shape was prolate spheroid; but several variations were observed, depending on the age, stature, and physiological condition of the female, as...

  19. Malaria Vaccine Study Site in Irian Jaya, Indonesia: Plasmodium Falciparum Incidence Measurements and Epidemiologic Considerations in Sample Size Estimation

    DTIC Science & Technology

    1994-01-01

    ...candidate vaccine... prevalence of Plasmodium falciparum among six populations with limited histories of exposure... at the start of a placebo-controlled vaccine study...

  20. Considerations for Integrating Women into Closed Occupations in the U.S. Special Operations Forces

    DTIC Science & Technology

    2015-05-01

    effectiveness of integration. Ideally, studies adopting an experimental design (using both test and control groups) would be preferred, but sample sizes may...data -- a survey of SOF personnel and a series of focus group discussions -- collected by the research team regarding the potential challenges to... controlled positions. This report summarizes our research, analysis, and conclusions. We used a mixed-methods approach. We reviewed the current state of

  1. Size matters at deep-sea hydrothermal vents: different diversity and habitat fidelity patterns of meio- and macrofauna

    PubMed Central

    Gollner, Sabine; Govenar, Breea; Fisher, Charles R.; Bright, Monika

    2015-01-01

    Species with markedly different sizes interact when sharing the same habitat. Unravelling mechanisms that control diversity thus requires consideration of a range of size classes. We compared patterns of diversity and community structure for meio- and macrofaunal communities sampled along a gradient of environmental stress at deep-sea hydrothermal vents on the East Pacific Rise (9° 50′ N) and neighboring basalt habitats. Both meio- and macrofaunal species richnesses were lowest in the high-stress vent habitat, but macrofaunal richness was highest among intermediate-stress vent habitats. Meiofaunal species richness was negatively correlated with stress, and highest on the basalt. In these deep-sea basalt habitats surrounding hydrothermal vents, meiofaunal species richness was consistently higher than that of macrofauna. Consideration of the physiological capabilities and life history traits of different-sized animals suggests that different patterns of diversity may be caused by different capabilities to deal with environmental stress in the 2 size classes. In contrast to meiofauna, adaptations of macrofauna may have evolved to allow them to maintain their physiological homeostasis in a variety of hydrothermal vent habitats and exploit this food-rich deep-sea environment in high abundances. The habitat fidelity patterns also differed: macrofaunal species occurred primarily at vents and were generally restricted to this habitat, but meiofaunal species were distributed more evenly across proximate and distant basalt habitats and were thus not restricted to vent habitats. Over evolutionary time scales these contrasting patterns are likely driven by distinct reproduction strategies and food demands inherent to fauna of different sizes. PMID:26166922

  2. Effect size and statistical power in the rodent fear conditioning literature - A systematic review.

    PubMed

    Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B

    2018-01-01

    Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.

  3. Effect size and statistical power in the rodent fear conditioning literature – A systematic review

    PubMed Central

    Macleod, Malcolm R.

    2018-01-01

    Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science. PMID:29698451

  4. Extracting samples of high diversity from thematic collections of large gene banks using a genetic-distance based approach

    PubMed Central

    2010-01-01

    Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small size, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should also be developed for a specific crop, composed of a limited number of accessions with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample - maintenance of at least 80% of the allelic richness of the thematic collection with approximately 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change in phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds" for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
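
    One generic distance-based selection heuristic of the kind discussed here is greedy maximin sampling: repeatedly add the accession whose genetic distance to the current core is largest. This is a sketch of that general idea, not the authors' algorithm, and the mismatch distance on random binary marker profiles is purely illustrative:

        import numpy as np

        def greedy_core(dist, k):
            """Greedy maximin selection: repeatedly add the accession farthest
            (in genetic distance) from the current core. dist: square matrix."""
            n = dist.shape[0]
            core = [int(np.argmax(dist.sum(axis=1)))]   # most distant overall
            while len(core) < k:
                d_to_core = dist[:, core].min(axis=1)   # distance to nearest core member
                d_to_core[core] = -1                    # exclude already-selected
                core.append(int(np.argmax(d_to_core)))
            return sorted(core)

        rng = np.random.default_rng(3)
        profiles = rng.integers(0, 2, size=(200, 50))   # 200 accessions, 50 binary loci
        dist = (profiles[:, None, :] != profiles[None, :, :]).mean(axis=2)
        print(greedy_core(dist, k=30))                  # ~15% of the collection size

    For a thematic version, one would first restrict `profiles` to accessions already known to be diverse for the trait of interest, mirroring the two-step construction described above.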

  5. Uncertainties in selected river water quality data

    NASA Astrophysics Data System (ADS)

    Rode, M.; Suhr, U.

    2007-02-01

    Monitoring of surface waters is primarily done to detect the status and trends in water quality and to identify whether observed trends arise from natural or anthropogenic causes. The empirical quality of river water quality data is rarely certain, and knowledge of their uncertainties is essential to assess the reliability of water quality models and their predictions. The objective of this paper is to assess the uncertainties in selected river water quality data, i.e. suspended sediment, nitrogen fraction, phosphorus fraction, heavy metals and biological compounds. The methodology used to structure the uncertainty is based on the empirical quality of data and the sources of uncertainty in data (van Loon et al., 2005). A literature review was carried out, including additional experimental data from the Elbe river. All data on compounds associated with suspended particulate matter have considerably higher sampling uncertainties than soluble concentrations. This is due to high variability within the cross section of a given river. This variability is positively correlated with total suspended particulate matter concentrations. Sampling location also has a considerable effect on the representativeness of a water sample. These sampling uncertainties are highly site specific. The estimation of uncertainty in sampling can only be achieved by taking at least a proportion of samples in duplicate. Compared to sampling uncertainties, measurement and analytical uncertainties are much lower. Instrument quality can be considered well suited to field and laboratory situations for all considered constituents. Analytical errors can contribute considerably to the overall uncertainty of river water quality data. Temporal autocorrelation of river water quality data is present, but literature on the general behaviour of water quality compounds is rare. For meso scale river catchments (500-3000 km2), reasonable yearly dissolved load calculations can be achieved using biweekly sample frequencies. For suspended sediments, none of the methods investigated produced very reliable load estimates when weekly concentration data were used. Uncertainties associated with load estimates based on infrequent samples will decrease with increasing size of rivers.

  6. Uncertainties in selected surface water quality data

    NASA Astrophysics Data System (ADS)

    Rode, M.; Suhr, U.

    2006-09-01

    Monitoring of surface waters is primarily done to detect the status and trends in water quality and to identify whether observed trends arise from natural or anthropogenic causes. The empirical quality of surface water quality data is rarely certain, and knowledge of their uncertainties is essential to assess the reliability of water quality models and their predictions. The objective of this paper is to assess the uncertainties in selected surface water quality data, i.e. suspended sediment, nitrogen fraction, phosphorus fraction, heavy metals and biological compounds. The methodology used to structure the uncertainty is based on the empirical quality of data and the sources of uncertainty in data (van Loon et al., 2006). A literature review was carried out, including additional experimental data from the Elbe river. All data on compounds associated with suspended particulate matter have considerably higher sampling uncertainties than soluble concentrations. This is due to high variability within the cross section of a given river. This variability is positively correlated with total suspended particulate matter concentrations. Sampling location also has a considerable effect on the representativeness of a water sample. These sampling uncertainties are highly site specific. The estimation of uncertainty in sampling can only be achieved by taking at least a proportion of samples in duplicate. Compared to sampling uncertainties, measurement and analytical uncertainties are much lower. Instrument quality can be considered well suited to field and laboratory situations for all considered constituents. Analytical errors can contribute considerably to the overall uncertainty of surface water quality data. Temporal autocorrelation of surface water quality data is present, but literature on the general behaviour of water quality compounds is rare. For meso scale river catchments, reasonable yearly dissolved load calculations can be achieved using biweekly sample frequencies. For suspended sediments, none of the methods investigated produced very reliable load estimates when weekly concentration data were used. Uncertainties associated with load estimates based on infrequent samples will decrease with increasing size of rivers.

  7. Cryo-tomography Tilt-series Alignment with Consideration of the Beam-induced Sample Motion

    PubMed Central

    Fernandez, Jose-Jesus; Li, Sam; Bharat, Tanmay A. M.; Agard, David A.

    2018-01-01

    Recent evidence suggests that the beam-induced motion of the sample during tilt-series acquisition is a major resolution-limiting factor in electron cryo-tomography (cryoET). It causes suboptimal tilt-series alignment and thus deterioration of the reconstruction quality. Here we present a novel approach to tilt-series alignment and tomographic reconstruction that considers the beam-induced sample motion through the tilt-series. It extends the standard fiducial-based alignment approach in cryoET by introducing quadratic polynomials to model the sample motion. The model can be used during reconstruction to yield a motion-compensated tomogram. We evaluated our method on various datasets with different sample sizes. The results demonstrate that our method could be a useful tool to improve the quality of tomograms and the resolution in cryoET. PMID:29410148
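
    A toy illustration of the modelling idea (the actual method fits the polynomial motion jointly with the projection geometry from fiducial positions): treat a fiducial's residual displacement across the tilt-series as a quadratic function of acquisition order, fit it, and subtract it to obtain motion-compensated residuals. All values below are synthetic:

        import numpy as np

        rng = np.random.default_rng(5)
        n_tilts = 61
        t = np.arange(n_tilts, dtype=float)            # acquisition order

        true_motion = 0.002 * t**2 - 0.05 * t          # beam-induced drift (pixels)
        observed = true_motion + rng.normal(0, 0.3, n_tilts)  # tracked minus rigid model

        coeffs = np.polyfit(t, observed, deg=2)        # quadratic motion model
        corrected = observed - np.polyval(coeffs, t)   # motion-compensated residuals
        print("RMS residual before/after:",
              np.sqrt((observed**2).mean()).round(2),
              np.sqrt((corrected**2).mean()).round(2))

    In the full method, an analogous low-order polynomial correction is estimated per sample region and applied during reconstruction to yield the motion-compensated tomogram.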

  8. Heterogeneity in small aliquots of Apollo 15 olivine-normative basalt: Implications for breccia clast studies

    NASA Astrophysics Data System (ADS)

    Lindstrom, Marilyn M.; Shervais, John W.; Vetter, Scott K.

    1993-05-01

    Most of the recent advances in lunar petrology are the direct result of breccia pull-apart studies, which have identified a wide array of new highland and mare basalt rock types that occur only as clasts within the breccias. These rocks show that the lunar crust is far more complex than suspected previously, and that processes such as magma mixing and wall-rock assimilation were important in its petrogenesis. These studies are based on the implicit assumption that the breccia clasts, which range in size from a few mm to several cm across, are representative of the parent rock from which they were derived. In many cases, the aliquot allocated for analysis may be only a few grain diameters across. While this problem is most acute for coarse-grained highland rocks, it can also cause considerable uncertainty in the analysis of mare basalt clasts. Similar problems arise with small aliquots of individual hand samples. Here we report a study of sample heterogeneity in 9 samples of Apollo 15 olivine-normative basalt (ONB) that exhibit a range in average grain size from coarse to fine. Seven of these samples have not been analyzed previously, one has been analyzed by INAA only, and one has been analyzed by XRF+INAA. Our goal is to assess the effects of small aliquot size on the bulk chemistry of large mare basalt samples, and to extend this assessment to analyses of small breccia clasts.

  9. Heterogeneity in small aliquots of Apollo 15 olivine-normative basalt: Implications for breccia clast studies

    NASA Technical Reports Server (NTRS)

    Lindstrom, Marilyn M.; Shervais, John W.; Vetter, Scott K.

    1993-01-01

    Most of the recent advances in lunar petrology are the direct result of breccia pull-apart studies, which have identified a wide array of new highland and mare basalt rock types that occur only as clasts within the breccias. These rocks show that the lunar crust is far more complex than suspected previously, and that processes such as magma mixing and wall-rock assimilation were important in its petrogenesis. These studies are based on the implicit assumption that the breccia clasts, which range in size from a few mm to several cm across, are representative of the parent rock from which they were derived. In many cases, the aliquot allocated for analysis may be only a few grain diameters across. While this problem is most acute for coarse-grained highland rocks, it can also cause considerable uncertainty in the analysis of mare basalt clasts. Similar problems arise with small aliquots of individual hand samples. Here we report a study of sample heterogeneity in 9 samples of Apollo 15 olivine-normative basalt (ONB) that exhibit a range in average grain size from coarse to fine. Seven of these samples have not been analyzed previously, one has been analyzed by INAA only, and one has been analyzed by XRF+INAA. Our goal is to assess the effects of small aliquot size on the bulk chemistry of large mare basalt samples, and to extend this assessment to analyses of small breccia clasts.

  10. Operationalizing hippocampal volume as an enrichment biomarker for amnestic mild cognitive impairment trials: effect of algorithm, test-retest variability, and cut point on trial cost, duration, and sample size.

    PubMed

    Yu, Peng; Sun, Jia; Wolz, Robin; Stephenson, Diane; Brewer, James; Fox, Nick C; Cole, Patricia E; Jack, Clifford R; Hill, Derek L G; Schwarz, Adam J

    2014-04-01

    The objective of this study was to evaluate the effect of computational algorithm, measurement variability, and cut point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). We used normal control and amnestic MCI subjects from the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) as normative reference and screening cohorts. We evaluated the enrichment performance of 4 widely used hippocampal segmentation algorithms (FreeSurfer, Hippocampus Multi-Atlas Propagation and Segmentation (HMAPS), Learning Embeddings Atlas Propagation (LEAP), and NeuroQuant) in terms of 2-year changes in Mini-Mental State Examination (MMSE), Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Sum of Boxes (CDR-SB). We modeled the implications for sample size, screen fail rates, and trial cost and duration. HCV-based patient selection yielded reduced sample sizes (by ∼40%-60%) and lower trial costs (by ∼30%-40%) across a wide range of cut points. These results provide a guide to the choice of HCV cut point for amnestic MCI clinical trials, allowing an informed tradeoff between statistical and practical considerations. Copyright © 2014 Elsevier Inc. All rights reserved.
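
    The tradeoff being operationalized can be sketched with the standard two-sample formula n = 2*sigma^2*(z_(1-alpha/2) + z_(power))^2 / delta^2: a stricter HCV cut point increases the expected decline (delta) and thus shrinks the randomized sample, but more screened patients fail the cut. The deltas, SDs, and pass rates below are hypothetical, not values from ADNI-1:

        import math
        from scipy import stats

        def n_per_arm(delta, sd, alpha=0.05, power=0.8):
            """Standard two-sample normal approximation for required n per arm."""
            za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
            return math.ceil(2 * (sd * (za + zb) / delta) ** 2)

        # Hypothetical scenarios: enrichment raises the mean 2-year decline (delta)
        # on the outcome scale but lowers the screening pass rate.
        for label, delta, sd, pass_rate in [("no enrichment", 1.0, 3.0, 1.00),
                                            ("lenient cut",   1.4, 3.0, 0.70),
                                            ("strict cut",    1.8, 3.0, 0.40)]:
            n = n_per_arm(delta, sd)
            screened = math.ceil(2 * n / pass_rate)
            print(f"{label:13s} n/arm = {n:4d}, patients screened = {screened:5d}")

    Folding per-randomized and per-screened costs into the last two numbers gives the kind of cost-versus-cut-point curve the study reports.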

  11. Multilevel Factorial Experiments for Developing Behavioral Interventions: Power, Sample Size, and Resource Considerations†

    PubMed Central

    Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.

    2012-01-01

    Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or employees within organizations). In this article we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements such as the number of clusters, the number of lower-level units, and the intraclass correlation affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes, because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. PMID:22309956
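
    For intuition on the interplay the simulations explore, a closed-form approximation is useful: inflate the variance by the design effect 1 + (m - 1)*ICC, where m is the number of lower-level units per cluster, and apply a normal-approximation power formula for a standardized main effect. This ignores subtleties the Monte Carlo study handles (e.g., cluster-level degrees of freedom), and all numbers are illustrative:

        import math
        from scipy import stats

        def power_cluster_factor(n_clusters, m, icc, effect_d, alpha=0.05):
            """Approximate power for one factor's main effect in a balanced
            cluster-randomized factorial design, via the design effect."""
            deff = 1 + (m - 1) * icc
            n_eff = n_clusters * m / deff          # effective total sample size
            se = 2 / math.sqrt(n_eff)              # SE of standardized mean difference
            z = abs(effect_d) / se - stats.norm.ppf(1 - alpha / 2)
            return stats.norm.cdf(z)

        for icc in (0.01, 0.05, 0.10):
            print("ICC =", icc, "power =",
                  round(power_cluster_factor(n_clusters=40, m=25,
                                             icc=icc, effect_d=0.3), 2))

    Because every factor's main effect in a complete factorial uses the whole sample, several components can be screened for roughly the price of one two-arm trial, which is the economy the article emphasizes.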

  12. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology.

    PubMed

    McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad

    2011-07-01

    The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
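
    A minimal sketch of the limits-of-agreement computation the authors recommend (the classic Bland-Altman formulation, with the usual approximate standard error for each limit); the tonometer readings below are invented:

        import numpy as np

        def limits_of_agreement(a, b):
            """Bland-Altman 95% limits of agreement between two methods,
            with an approximate SE for each limit."""
            d = np.asarray(a, float) - np.asarray(b, float)
            n, bias, sd = d.size, d.mean(), d.std(ddof=1)
            loa = (bias - 1.96 * sd, bias + 1.96 * sd)
            # approximate SE of each limit: var = s^2 * (1/n + 1.96^2 / (2(n-1)))
            se_limit = sd * np.sqrt(1 / n + 1.96**2 / (2 * (n - 1)))
            return bias, loa, se_limit

        # e.g., intraocular pressure from two tonometers (made-up readings)
        rng = np.random.default_rng(2)
        a = rng.normal(16, 3, 40)
        b = a + rng.normal(0.5, 1.2, 40)
        bias, (lo, hi), se = limits_of_agreement(a, b)
        print(f"bias {bias:.2f}, LoA ({lo:.2f}, {hi:.2f}), SE of each limit {se:.2f}")

    The SE of the limits also drives the sample size consideration: more subjects narrow the confidence intervals around the limits, not the limits themselves.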

  13. Distribution and diversity of cytotypes in Dianthus broteri as evidenced by genome size variations.

    PubMed

    Balao, Francisco; Casimiro-Soriguer, Ramón; Talavera, María; Herrera, Javier; Talavera, Salvador

    2009-10-01

    Studying the spatial distribution of cytotypes and genome size in plants can provide valuable information about the evolution of polyploid complexes. Here, the spatial distribution of cytological races and the amount of DNA in Dianthus broteri, an Iberian carnation with several ploidy levels, is investigated. Chromosome counts and flow cytometry (using propidium iodide) were used to determine overall genome size (2C value) and ploidy level in 244 individuals of 25 populations. Both fresh and dried samples were investigated. Differences in 2C and 1Cx values among ploidy levels within biogeographical provinces were tested using ANOVA. Geographical correlations of genome size were also explored. Extensive variation in chromosome numbers (2n = 2x = 30, 2n = 4x = 60, 2n = 6x = 90 and 2n = 12x = 180) was detected, and the dodecaploid cytotype is reported for the first time in this genus. As regards cytotype distribution, six populations were diploid, 11 were tetraploid, three were hexaploid and five were dodecaploid. Except for one diploid population containing some triploid plants (2n = 45), the remaining populations showed a single cytotype. Diploids appeared in two disjunct areas (south-east and south-west), and so did tetraploids (although with a considerably wider geographic range). Dehydrated leaf samples provided reliable measurements of DNA content. Genome size varied significantly among some cytotypes, and also extensively within diploid (up to 1.17-fold) and tetraploid (1.22-fold) populations. Nevertheless, variations were not straightforwardly congruent with ecology and geographical distribution. Dianthus broteri shows the highest diversity of cytotypes known to date in the genus Dianthus. Moreover, some cytotypes present remarkable internal genome size variation. The evolution of the complex is discussed in terms of autopolyploidy, with primary and secondary contact zones.

  14. Experimental toxicology: Issues of statistics, experimental design, and replication.

    PubMed

    Briner, Wayne; Kirwan, Jeral

    2017-01-01

    The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis driven experiments with adequate sample sizes, randomization, and blind data collection techniques. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Effect of gamma irradiation on rheological properties of polysaccharides exuded by A. fluccosus and A. gossypinus.

    PubMed

    Alijani, Samira; Balaghi, Sima; Mohammadifar, Mohammad Amin

    2011-11-01

    In this study, Iranian gum tragacanth (GT) exudates from Astragalus fluccosus (AFG) and Astragalus gossypinus (AGG) were irradiated at 3, 7, 10 and 15 kGy. Fourier transform infrared spectroscopy (FTIR) data showed that irradiation did not induce changes in the chemical structure of either type of gum. Although particle size distribution and both steady shear and dynamic rheological properties were considerably affected by the irradiation process, the magnitude of the effect of irradiation on each of the rheological and size variables was different for the two hydrocolloids. For instance, for AGG, increasing the irradiation dose from 3 to 10 kGy reduced the d(0.5) and D[3,2] values to between one-sixth and one-eighth of their initial values. Colour measurement revealed that the radiation process led to an increase in the yellow index and b* values for both types of GT in powder form, but this was more pronounced for AGG samples. Irradiation led to an approximately 13-fold increase in redness in AFG. Surface and shape changes of the gum crystals were studied by scanning electron microscopy (SEM), and a smoother surface was detected for irradiated samples. The notable changes in the functional properties of each variety of irradiated gum should be taken into consideration before using radiation technology as a commercial tool for sterilisation. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiser, I; Lu, Z

    2014-06-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers, or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs (N=20,40,60,80). Observer performance was quantified as proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For n=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC, at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model, one that can predict performance from few samples, is needed. Our results identified two observer models that may be suited for this task.
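
    A compact version of one of the observer models named above, a channelized Hotelling observer with Laguerre-Gauss channels, can be written in a few lines; the channel width, channel count, and toy Gaussian signal below are assumptions for illustration, not the study's settings:

        import numpy as np
        from scipy.special import eval_laguerre
        from scipy.stats import norm

        def lg_channels(size, a=10.0, n_channels=5):
            """Laguerre-Gauss radial channel images, flattened to (n, size*size)."""
            y, x = np.indices((size, size)) - (size - 1) / 2
            r2 = (x**2 + y**2) * np.pi / a**2
            return np.stack([np.exp(-r2) * eval_laguerre(j, 2 * r2)
                             for j in range(n_channels)]).reshape(n_channels, -1)

        def cho_pc_2afc(present, absent, a=10.0):
            """2AFC proportion correct of an LG-channel CHO: PC = Phi(d'/sqrt(2))."""
            U = lg_channels(present.shape[-1], a)
            vs = present.reshape(len(present), -1) @ U.T    # channel outputs
            va = absent.reshape(len(absent), -1) @ U.T
            dm = vs.mean(0) - va.mean(0)
            S = 0.5 * (np.cov(vs.T) + np.cov(va.T))         # pooled channel covariance
            d2 = dm @ np.linalg.solve(S, dm)                # Hotelling SNR squared
            return norm.cdf(np.sqrt(d2) / np.sqrt(2))

        # Toy data: 64x64 ROIs, Gaussian signal in white noise (illustrative only)
        rng = np.random.default_rng(4)
        y, x = np.indices((64, 64)) - 31.5
        sig = 0.4 * np.exp(-(x**2 + y**2) / (2 * 4.0**2))
        absent = rng.normal(0, 1, (100, 64, 64))
        present = rng.normal(0, 1, (100, 64, 64)) + sig
        print(round(cho_pc_2afc(present, absent), 3))

    Re-running the estimate on random subsets of 20, 40, 60, and 80 pairs makes the small-sample bias quantified in the abstract directly visible.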

  17. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme.

    PubMed

    Walters, Stephen J; Bonacho Dos Anjos Henriques-Cadby, Inês; Bortolami, Oscar; Flight, Laura; Hind, Daniel; Jacques, Richard M; Knox, Christopher; Nadin, Ben; Rothwell, Joanne; Surtees, Michael; Julious, Steven A

    2017-03-20

    Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers. Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs, and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43-2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79-97%). There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  18. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme

    PubMed Central

    Bonacho dos Anjos Henriques-Cadby, Inês; Bortolami, Oscar; Flight, Laura; Hind, Daniel; Knox, Christopher; Nadin, Ben; Rothwell, Joanne; Surtees, Michael; Julious, Steven A

    2017-01-01

    Background Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. Objectives To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. Data sources and study selection HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Data extraction Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers. Main outcome measures Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). Results This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs, and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43–2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79–97%). Conclusions There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections. PMID:28320800

  19. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion is felt to lack a sufficient degree of consideration devoted to the true virtues of the delayed-start design and the implications either in terms of required sample size, overall information, or interpretation of the estimate in the context of small populations. To evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase, because a reduced time on placebo results in a smaller observable treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also has implications for benefit-risk assessment.
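
    The core sample-size point can be made with the standard normal-approximation formula for comparing two means, n per arm = 2·σ²·(z₁₋α/₂ + z₁₋β)²/Δ²: shrinking the observable effect inflates the required sample size quadratically. The sketch below uses purely illustrative numbers (an effect attenuated from 5 to 3 units at SD 10).

    ```python
    import math
    from scipy import stats

    def n_per_arm(delta, sd, alpha=0.05, power=0.80):
        """Per-arm sample size for a two-sample comparison of means,
        normal approximation: n = 2 * (sd * (z_a + z_b) / delta)^2."""
        z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
        return math.ceil(2 * (sd * z / delta) ** 2)

    print(n_per_arm(delta=5.0, sd=10.0))  # full effect: 63 per arm
    print(n_per_arm(delta=3.0, sd=10.0))  # attenuated effect: 175 per arm
    ```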

  20. Aerosol Measurements of the Fine and Ultrafine Particle Content of Lunar Regolith

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Chen, Da-Ren; Smith, Sally A.

    2007-01-01

    We report the first quantitative measurements of the ultrafine (20 to 100 nm) and fine (100 nm to 20 μm) particulate components of Lunar surface regolith. The measurements were performed by gas-phase dispersal of the samples, and analysis using aerosol diagnostic techniques. This approach makes no a priori assumptions about the particle size distribution function as required by ensemble optical scattering methods, and is independent of refractive index and density. The method provides direct evaluation of effective transport diameters, in contrast to indirect scattering techniques or size information derived from two-dimensional projections of high-magnification images. The results demonstrate considerable populations in these size regimes. In light of the numerous difficulties attributed to dust exposure during the Apollo program, this outcome is of significant importance to the design of mitigation technologies for future Lunar exploration.

  1. Reference interval computation: which method (not) to choose?

    PubMed

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

    When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60 and very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way to calculate RIs, provided the transformed data pass a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
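
    The two recommended approaches are easy to sketch. Below, a nonparametric bootstrap of the central 95% interval is compared with a parametric interval computed after a Box-Cox transformation (with a normality check) and back-transformed. The data are simulated, not drawn from the paper's database.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(1)
    values = rng.lognormal(mean=1.0, sigma=0.4, size=60)  # simulated analyte

    # Bootstrap RI: average the 2.5th/97.5th percentiles over resamples.
    boot = [np.percentile(rng.choice(values, values.size, replace=True),
                          [2.5, 97.5])
            for _ in range(2000)]
    ri_boot = np.mean(boot, axis=0)

    # Box-Cox transformed parametric RI, back-transformed to original units.
    z, lam = stats.boxcox(values)
    if stats.shapiro(z).pvalue > 0.05:  # only trust it if normality holds
        lo, hi = np.mean(z) + np.array([-1.96, 1.96]) * np.std(z, ddof=1)
        ri_param = inv_boxcox(np.array([lo, hi]), lam)
        print("parametric (Box-Cox):", np.round(ri_param, 2))
    print("bootstrap:", np.round(ri_boot, 2))
    ```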

  2. Use of the superpopulation approach to estimate breeding population size: An example in asynchronously breeding birds

    USGS Publications Warehouse

    Williams, K.A.; Frederick, P.C.; Nichols, J.D.

    2011-01-01

    Many populations of animals are fluid in both space and time, making estimation of numbers difficult. Much attention has been devoted to estimation of bias in detection of animals that are present at the time of survey. However, an equally important problem is estimation of population size when all animals are not present on all survey occasions. Here, we showcase use of the superpopulation approach to capture-recapture modeling for estimating populations where group membership is asynchronous, and where considerable overlap in group membership among sampling occasions may occur. We estimate total population size of long-legged wading bird (Great Egret and White Ibis) breeding colonies from aerial observations of individually identifiable nests at various times in the nesting season. Initiation and termination of nests were analogous to entry and departure from a population. Estimates using the superpopulation approach were 47-382% larger than peak aerial counts of the same colonies. Our results indicate that the use of the superpopulation approach to model nesting asynchrony provides a considerably less biased and more efficient estimate of nesting activity than traditional methods. We suggest that this approach may also be used to derive population estimates in a variety of situations where group membership is fluid. © 2011 by the Ecological Society of America.

  3. Effect of bait and gear type on channel catfish catch and turtle bycatch in a reservoir

    USGS Publications Warehouse

    Cartabiano, Evan C.; Stewart, David R.; Long, James M.

    2014-01-01

    Hoop nets have become the preferred gear choice to sample channel catfish Ictalurus punctatus but the degree of bycatch can be high, especially due to the incidental capture of aquatic turtles. While exclusion and escapement devices have been developed and evaluated, few have examined bait choice as a method to reduce turtle bycatch. The use of Zote™ soap has shown considerable promise to reduce bycatch of aquatic turtles when used with trotlines but its effectiveness in hoop nets has not been evaluated. We sought to determine the effectiveness of hoop nets baited with cheese bait or Zote™ soap and trotlines baited with shad or Zote™ soap as a way to sample channel catfish and prevent capture of aquatic turtles. We used a repeated-measures experimental design and treatment combinations were randomly assigned using a Latin-square arrangement. Eight sampling locations were systematically selected and then sampled with either hoop nets or trotlines using Zote™ soap (both gears), waste cheese (hoop nets), or cut shad (trotlines). Catch rates did not statistically differ among the gear–bait-type combinations. Size bias was evident with trotlines consistently capturing larger sized channel catfish compared to hoop nets. Results from a Monte Carlo bootstrapping procedure estimated the number of samples needed to reach predetermined levels of sampling precision to be lowest for trotlines baited with soap. Moreover, trotlines baited with soap caught no aquatic turtles, while hoop nets captured many turtles and had high mortality rates. We suggest that Zote™ soap used in combination with multiple hook sizes on trotlines may be a viable alternative to sample channel catfish and reduce bycatch of aquatic turtles.

  4. Structural and Morphological Evaluation of Nano-Sized MoSi2 Powder Produced by Mechanical Milling

    NASA Astrophysics Data System (ADS)

    Sameezadeh, Mahmood; Farhangi, Hassan; Emamy, Masoud

    Nano-sized intermetallic powders have received great attention owing to their property advantages over conventional micro-sized counterparts. In the present study, nano-sized MoSi2 powder was produced successfully from commercially available MoSi2 (3 μm) by a mechanical milling process carried out for a period of 100 hours. The effects of milling time on the size and morphology of the powders were studied by SEM, TEM, and an image-analysis system. The results indicate that the as-received micrometric powder, with a wide size distribution of irregularly shaped particles, changes to a narrow size distribution of nearly equiaxed particles with the progress of attrition milling up to 100 h, reaching an average particle size of 71 nm. Structural evolution of the milled samples was characterized by XRD to determine the crystallite size and lattice microstrain using the Williamson-Hall method. According to the results, the crystallite size of the powders decreases continuously down to 23 nm with increasing milling time up to 100 h, and this size refinement is more rapid at the early stages of the milling process. On the other hand, the lattice strain increases considerably with milling up to 65 h, and further milling causes no significant change in lattice strain.
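
    The Williamson-Hall analysis mentioned above separates size and strain broadening by fitting β·cosθ = Kλ/D + 4ε·sinθ across reflections: the intercept gives the crystallite size D and the slope gives the microstrain ε. A minimal sketch with invented peak data (the paper's diffraction values are not reproduced here):

    ```python
    import numpy as np

    # Hypothetical XRD peaks for a milled sample: 2-theta in degrees,
    # integral breadths beta in radians. Values are illustrative only.
    two_theta = np.array([40.5, 44.8, 65.2, 78.3])
    beta = np.array([0.0042, 0.0046, 0.0061, 0.0071])

    K, lam = 0.9, 0.15406  # shape factor; Cu K-alpha wavelength (nm)
    theta = np.radians(two_theta / 2)

    # Williamson-Hall: beta*cos(theta) = K*lam/D + 4*eps*sin(theta)
    x = 4 * np.sin(theta)
    y = beta * np.cos(theta)
    slope, intercept = np.polyfit(x, y, 1)

    D = K * lam / intercept  # crystallite size (nm)
    eps = slope              # lattice microstrain
    print(f"D = {D:.0f} nm, strain = {eps:.2e}")
    ```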

  5. Interpretation of correlations in clinical research.

    PubMed

    Hung, Man; Bounsanga, Jerry; Voss, Maren Wright

    2017-11-01

    Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused, including causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recent published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize the statistical significance without consideration of effect size and whether differences could be considered clinically meaningful.
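
    The sample-size point is easy to demonstrate: the p-value of a fixed, clinically trivial correlation shrinks as n grows, which is how large studies can overstate the importance of weak predictors. A small sketch using the standard t transform of Pearson's r:

    ```python
    import numpy as np
    from scipy import stats

    def r_pvalue(r, n):
        """Two-sided p-value for a Pearson correlation via the t transform:
        t = r * sqrt((n - 2) / (1 - r^2)), with n - 2 degrees of freedom."""
        t = r * np.sqrt((n - 2) / (1 - r**2))
        return 2 * stats.t.sf(abs(t), df=n - 2)

    for n in (30, 300, 3000):
        print(n, f"{r_pvalue(0.10, n):.4f}")
    # r = 0.10 is non-significant at n = 30 but highly significant at
    # n = 3000, even though it still explains only 1% of the variance.
    ```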

  6. Combining the role of convenience and consideration set size in explaining fish consumption in Norway.

    PubMed

    Rortveit, Asbjorn Warvik; Olsen, Svein Ottar

    2009-04-01

    The purpose of this study is to explore how convenience orientation, perceived product inconvenience and consideration set size are related to attitudes towards fish and fish consumption. The authors present a structural equation model (SEM) based on the integration of two previous studies. The results of a SEM analysis using Lisrel 8.72 on data from a Norwegian consumer survey (n=1630) suggest that convenience orientation and perceived product inconvenience have a negative effect on both consideration set size and consumption frequency. Attitude towards fish has the greatest impact on consumption frequency. The results also indicate that perceived product inconvenience is a key variable since it has a significant impact on attitude, and on consideration set size and consumption frequency. Further, the analyses confirm earlier findings suggesting that the effect of convenience orientation on consumption is partially mediated through perceived product inconvenience. The study also confirms earlier findings suggesting that the consideration set size affects consumption frequency. Practical implications drawn from this research are that the seafood industry would benefit from developing and positioning products that change beliefs about fish as an inconvenient product. Future research for other food categories should be done to enhance the external validity.

  7. The impact of sample non-normality on ANOVA and alternative methods.

    PubMed

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
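
    The kind of comparison the paper performs can be sketched in a few lines: draw all groups from the same distinctly non-normal population and compare the empirical type I error rates of one-way ANOVA and the Kruskal-Wallis test. The distribution, group sizes, and replication counts below are illustrative, not the paper's design.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def rejection_rate(draw, n=15, groups=3, reps=2000, alpha=0.05):
        """Empirical type I error of one-way ANOVA vs Kruskal-Wallis when
        all groups come from the same (non-normal) population."""
        f_rej = kw_rej = 0
        for _ in range(reps):
            data = [draw(n) for _ in range(groups)]
            f_rej += stats.f_oneway(*data).pvalue < alpha
            kw_rej += stats.kruskal(*data).pvalue < alpha
        return f_rej / reps, kw_rej / reps

    skewed = lambda n: rng.lognormal(0.0, 1.0, n)  # distinctly non-normal
    print(rejection_rate(skewed))
    ```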

  8. Surface ocean metabarcoding confirms limited diversity in planktonic foraminifera but reveals unknown hyper-abundant lineages.

    PubMed

    Morard, Raphaël; Garet-Delmas, Marie-José; Mahé, Frédéric; Romac, Sarah; Poulain, Julie; Kucera, Michal; de Vargas, Colomban

    2018-02-07

    Since the advent of DNA metabarcoding surveys, the planktonic realm is considered a treasure trove of diversity, inhabited by a small number of abundant taxa and a hugely diverse, taxonomically uncharacterized consortium of rare species. Here we assess if the apparent underestimation of plankton diversity applies universally. We target planktonic foraminifera, a group of protists whose known morphological diversity is limited, taxonomically resolved and linked to ribosomal DNA barcodes. We generated a pyrosequencing dataset of ~100,000 partial 18S rRNA foraminiferal sequences from 32 size-fractionated photic-zone plankton samples collected at 8 stations in the Indian and Atlantic Oceans during the Tara Oceans expedition (2009-2012). We identified 69 genetic types belonging to 41 morphotaxa in our metabarcoding dataset. The diversity saturated at local and regional scales, as well as in the three size fractions and the two depths sampled, indicating that the diversity of foraminifera is modest and finite. The large majority of the newly discovered lineages occur in the small size fraction, neglected by classical taxonomy. These unknown lineages dominate the bulk [>0.8 µm] size fraction, implying that a considerable part of the planktonic foraminifera community biomass has its origin in unknown lineages.

  9. Indomethacin nanocrystals prepared by different laboratory scale methods: effect on crystalline form and dissolution behavior

    NASA Astrophysics Data System (ADS)

    Martena, Valentina; Censi, Roberta; Hoti, Ela; Malaj, Ledjan; Di Martino, Piera

    2012-12-01

    The objective of this study was to select very simple and well-known laboratory-scale methods able to reduce the particle size of indomethacin down to the nanometric scale. The effect on the crystalline form and the dissolution behavior of the different samples was deliberately evaluated in the absence of any surfactants as stabilizers. Nanocrystals of indomethacin (IDM; native crystals are in the γ form) were obtained by three laboratory-scale methods: A (Batch A: crystallization by solvent evaporation in a nano-spray dryer), B (Batches B-15 and B-30: wet milling and lyophilization), and C (Batches C-20-N and C-40-N: cryo-milling in the presence of liquid nitrogen). Nanocrystals obtained by method A (Batch A) crystallized into a mixture of α and γ polymorphic forms. IDM obtained by the two other methods remained in the γ form, and differing degrees of crystallinity decrease were observed, with the most considerable decrease for IDM milled for 40 min in the presence of liquid nitrogen. The intrinsic dissolution rate (IDR) revealed a higher dissolution rate for Batches A and C-40-N, due to the higher IDR of the α form compared with the γ form for Batch A, and the lower crystallinity degree for both Batches A and C-40-N. These factors, as well as the decrease in particle size, influenced the IDM dissolution rate from the particle samples. Modifications in the solid physical state that may occur using different particle size reduction treatments have to be taken into consideration during the scale-up and industrial development of new solid dosage forms.

  10. Size effects in olivine control strength in low-temperature plasticity regime

    NASA Astrophysics Data System (ADS)

    Kumamoto, K. M.; Thom, C.; Wallis, D.; Hansen, L. N.; Armstrong, D. E. J.; Goldsby, D. L.; Warren, J. M.; Wilkinson, A. J.

    2017-12-01

    The strength of the lithospheric mantle during deformation by low-temperature plasticity controls a range of geological phenomena, including lithospheric-scale strain localization, the evolution of friction on deep seismogenic faults, and the flexure of tectonic plates. However, constraints on the strength of olivine in this deformation regime are difficult to obtain from conventional rock-deformation experiments, and previous results vary considerably. We demonstrate via nanoindentation that the strength of olivine in the low-temperature plasticity regime is dependent on the length-scale of the test, with experiments on smaller volumes of material exhibiting larger yield stresses. This "size effect" has previously been explained in engineering materials as a result of the role of strain gradients and associated geometrically necessary dislocations in modifying plastic behavior. The Hall-Petch effect, in which a material with a small grain size exhibits a higher strength than one with a large grain size, is thought to arise from the same mechanism. The presence of a size effect resolves discrepancies among previous experimental measurements of olivine, which were either conducted using indentation methods or were conducted on polycrystalline samples with small grain sizes. An analysis of different low-temperature plasticity flow laws extrapolated to room temperature reveals a power-law relationship between length-scale (grain size for polycrystalline deformation and contact radius for indentation tests) and yield strength. This suggests that data from samples with large inherent length scales best represent the plastic strength of the coarse-grained lithospheric mantle. Additionally, the plastic deformation of nanometer- to micrometer-sized asperities on fault surfaces may control the evolution of fault roughness due to their size-dependent strength.

  11. The relation between statistical power and inference in fMRI

    PubMed Central

    Wager, Tor D.; Yarkoni, Tal

    2017-01-01

    Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI, the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial, especially in regard to the expected effect sizes in fMRI and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20-30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resemble the weak diffuse scenario much more than the strong localized scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region of interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
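
    A minimal Monte Carlo version of the between-subjects case: simulate a true brain-behavior correlation of r = 0.2 and count how often it survives a stringent, multiple-comparison-like threshold. The sample sizes and alpha below are illustrative, not taken from the paper's simulations.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    def corr_power(r_true, n, reps=5000, alpha=0.001):
        """Monte Carlo power to detect a correlation of size r_true
        with n subjects at significance level alpha."""
        hits = 0
        for _ in range(reps):
            x = rng.standard_normal(n)
            y = r_true * x + np.sqrt(1 - r_true**2) * rng.standard_normal(n)
            hits += stats.pearsonr(x, y)[1] < alpha
        return hits / reps

    for n in (25, 100, 400):
        print(n, corr_power(0.2, n))  # power is very poor at n = 20-30
    ```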

  12. Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders

    USGS Publications Warehouse

    Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael

    2015-01-01

    Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.

  13. Speciation and leachability of copper in mine tailings from porphyry copper mining: influence of particle size.

    PubMed

    Hansen, Henrik K; Yianatos, Juan B; Ottosen, Lisbeth M

    2005-09-01

    Mine tailing from the El Teniente-Codelco copper mine, situated in the VI Region of Chile, was analysed in order to evaluate the mobility and speciation of copper in the solid material. Mine tailing was sampled after the rougher flotation circuits, and the copper content was measured at 1150 mg kg⁻¹ dry matter. This tailing was segmented into fractions of different size intervals: 0-38, 38-45, 45-53, 53-75, 75-106, 106-150, 150-212, and >212 μm, respectively. Copper content determination, sequential chemical extraction, and desorption experiments were carried out for each size interval in order to evaluate the speciation of copper. It was found that the smallest particles contained 50-60% weak-acid-leachable copper, whereas only 32% of the copper found in the largest particles could be leached in weak acid. Copper oxides and carbonates were the dominating species in the smaller particles, and the larger particles contained considerable amounts of sulphides.

  14. Methodological issues with adaptation of clinical trial design.

    PubMed

    Hung, H M James; Wang, Sue-Jane; O'Neill, Robert T

    2006-01-01

    Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, even though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as changing the primary endpoint or dropping a treatment arm. For sample size re-estimation, we contrast an adaptive test that weights the two-stage test statistics by the statistical information given in the original design with the original sample-mean test using a properly corrected critical value. We point out the difficulty in planning a confirmatory trial based on the crude information generated by exploratory trials. With regard to selecting a primary endpoint, we argue that a selection process that allows switching from one endpoint to the other with the internal data of the trial is not very likely to gain a power advantage over the simple process of selecting one of the two endpoints by testing them with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, distributing the remaining sample size of the discontinued arm to the other treatment arms can substantially improve the statistical power of identifying a superior treatment arm in the design. A common difficult methodological issue is how to select an adaptation rule at the trial planning stage. Pre-specification of the adaptation rule is important for practical reasons. Changing the originally intended hypothesis for testing with the internal data generates great concerns among clinical trial researchers.
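
    The weighting idea contrasted above can be sketched with the inverse-normal combination test: stage-wise z statistics are combined with weights fixed by the originally planned information fractions, so a data-driven change of the stage-2 sample size does not inflate the type I error. This is a sketch of the principle, not the authors' exact procedure; the z values and planned sizes are invented.

    ```python
    import numpy as np
    from scipy import stats

    def weighted_two_stage_z(z1, z2, n1_planned, n2_planned):
        """Inverse-normal combination: stage-wise z statistics weighted by
        the pre-planned information fractions. The weights stay fixed even
        if the realized stage-2 sample size is changed at the interim."""
        w1 = np.sqrt(n1_planned / (n1_planned + n2_planned))
        w2 = np.sqrt(n2_planned / (n1_planned + n2_planned))
        return w1 * z1 + w2 * z2

    z = weighted_two_stage_z(z1=1.5, z2=2.1, n1_planned=50, n2_planned=50)
    print(f"combined z = {z:.2f}, one-sided p = {stats.norm.sf(z):.4f}")
    ```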

  15. Fully automatic characterization and data collection from crystals of biological macromolecules.

    PubMed

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier; Bowler, Matthew W

    2015-08-01

    Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  16. Fracture surface analysis of a quenched (α+β)-metastable titanium alloy

    NASA Astrophysics Data System (ADS)

    Illarionov, A. G.; Stepanov, S. I.; Demakov, S. L.

    2017-12-01

    Fracture surface analysis was conducted by means of SEM for VT16 titanium alloy specimens solution-treated at temperatures ranging from 700 to 875 °C, water-quenched and subjected to tensile testing. A cup-and-cone failure and the dimpled microstructure of the fracture surface indicate ductile behavior of the alloy. Dimple dimensions correlated with the β-grain size of the alloy in the quenched condition. The fracture area (namely, the size and the cup-and-cone shape) depends on the volume fraction of the primary α-phase in the quenched sample. However, the fracture surface changes considerably when the strain-induced β→αʺ transformation takes place during tensile testing, resulting in an increase of alloy ductility.

  17. Frequency of multiple paternity in the spiny dogfish Squalus acanthias in the western north Atlantic.

    PubMed

    Veríssimo, Ana; Grubbs, Dean; McDowell, Jan; Musick, John; Portnoy, David

    2011-01-01

    Multiple paternity (MP) has been shown to be widespread in elasmobranch fishes although its prevalence and the number of sires per litter vary considerably among species. In the squaloid shark Squalus acanthias, MP has been reported, but whether it is a common feature of the species' reproductive strategy is unknown. In this study, we determined the frequency of MP in 29 litters of S. acanthias sampled from the lower Chesapeake Bay and coastal Virginia waters, using 7 highly polymorphic nuclear DNA microsatellite loci. Only 5 litters (17% of the total) were genetically polyandrous, with at least 2 sires per litter. Litter size increased with female size but was similar between polyandrous and monandrous females.

  18. Metastable phase formation in the Au-Si system via ultrafast nanocalorimetry

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Wen, J. G.; Efremov, M. Y.; Olson, E. A.; Zhang, Z. S.; Hu, L.; de la Rama, L. P.; Kummamuru, R.; Kavanagh, K. L.; Ma, Z.; Allen, L. H.

    2012-05-01

    We have investigated the stability and solidification of nanometer-size Au-Si droplets using ultrafast heating/cooling nanocalorimetry and in situ growth techniques. The liquid can be supercooled to very low temperatures for both Au-rich (ΔT ˜ 95 K) and Si-rich (ΔT ˜ 220 K) samples. Solidification of a unique metastable phase δ1 is observed, with a composition of 74 ± 4 at. % Au and a b-centered orthorhombic structure (a = 0.92, b = 0.72, and c = 1.35 nm; body-centered in the a-c plane), which grows heteroepitaxially on solid Au. Its melting temperature Tm is 305 ± 5 °C. There is competition during formation between the eutectic and δ1 phases, but δ1 is the only metastable alloy observed. For small droplets, both the δ1 and eutectic phases show considerable depression of the melting point (size-dependent melting).

  19. Sizing-tube-fin space radiators

    NASA Technical Reports Server (NTRS)

    Peoples, J. A.

    1978-01-01

    Temperature and size considerations of the tube-fin space radiator were characterized by charts and equations. An approach for accurately assessing rejection capability commensurate with a phase A/B-level output is reviewed. A computer program based on Mackey's equations is also presented, which sizes the rejection area for a given thermal load. The program also handles the flow and thermal considerations of the film coefficient.
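
    For orientation, the core of any such sizing is the Stefan-Boltzmann balance A = Q / (η·ε·σ·(T⁴ − T_sink⁴)). The sketch below uses this idealized form with illustrative numbers; Mackey's equations additionally treat fin conduction and the fluid-side film coefficient, which this toy omits.

    ```python
    def radiator_area(q_watts, t_surface, t_sink=0.0,
                      emissivity=0.85, fin_eff=0.9):
        """Idealized radiating area from the Stefan-Boltzmann law:
        A = Q / (eta * eps * sigma * (T^4 - T_sink^4)). A rough sizing
        sketch only; all parameter values here are assumptions."""
        sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
        return q_watts / (fin_eff * emissivity * sigma
                          * (t_surface**4 - t_sink**4))

    # Reject 5 kW at a 320 K radiating surface to deep space.
    print(f"{radiator_area(5000.0, t_surface=320.0):.1f} m^2")  # ~11 m^2
    ```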

  20. Enrichment and distribution of 24 elements within the sub-sieve particle size distribution ranges of fly ash from wastes incinerator plants.

    PubMed

    Raclavská, Helena; Corsaro, Agnieszka; Hartmann-Koval, Silvie; Juchelková, Dagmar

    2017-12-01

    The management of an increasing amount of municipal waste via incineration has been gaining traction. Fly ash, as a by-product of incineration of municipal solid waste, is considered a hazardous waste due to the elevated content of various elements. The enrichment and distribution of 24 elements in fly ash from three waste incinerators were evaluated. Two coarse (>100 μm and <100 μm) and five sub-sieve (12-16, 16-23, 23-34, 34-49, and 49-100 μm) particle size fractions separated on a cyclosizer system were analyzed. An enhancement in the enrichment factor was observed in all samples for the majority of elements in the >100 μm range compared with the <100 μm range. The enrichment factor of individual elements varied considerably within the samples as well as across the sub-sieve particle size ranges. These variations were attributed primarily to: (i) the vaporization and condensation mechanisms, (ii) the different design of incineration plants, (iii) incineration properties, (iv) the type of material being incinerated, and (v) the affinity of elements. Copyright © 2017 Elsevier Ltd. All rights reserved.
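
    Enrichment factors of this kind are conventionally computed as a double ratio against a reference element and a background composition. A minimal sketch; the reference element, background values, and concentrations below are assumptions for illustration, not taken from the paper:

    ```python
    def enrichment_factor(c_elem, c_ref, bg_elem, bg_ref):
        """Classic double-ratio enrichment factor:
        EF = (C_element / C_reference)_sample
             / (C_element / C_reference)_background.
        The reference element (e.g. Al) and background composition
        are analyst choices; the paper does not specify ours."""
        return (c_elem / c_ref) / (bg_elem / bg_ref)

    # Illustrative concentrations (mg/kg): Zn vs Al in one fly-ash size
    # fraction, against an average-crust background.
    print(f"EF = {enrichment_factor(1200.0, 30000.0, 70.0, 80000.0):.0f}")
    ```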

  1. Detecting trends in tree growth: not so simple.

    PubMed

    Bowman, David M J S; Brienen, Roel J W; Gloor, Emanuel; Phillips, Oliver L; Prior, Lynda D

    2013-01-01

    Tree biomass influences biogeochemical cycles, climate, and biodiversity across local to global scales. Understanding the environmental control of tree biomass demands consideration of the drivers of individual tree growth over their lifespan. This can be achieved by studies of tree growth in permanent sample plots (prospective studies) and tree ring analyses (retrospective studies). However, identification of growth trends and attribution of their drivers demands statistical control of the axiomatic co-variation of tree size and age, and avoiding sampling biases at the stand, forest, and regional scales. Tracking and predicting the effects of environmental change on tree biomass requires well-designed studies that address the issues that we have reviewed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Hydrodynamic Electron Flow and Hall Viscosity

    NASA Astrophysics Data System (ADS)

    Scaffidi, Thomas; Nandi, Nabhanila; Schmidt, Burkhard; Mackenzie, Andrew P.; Moore, Joel E.

    2017-06-01

    In metallic samples of small enough size and sufficiently strong momentum-conserving scattering, the viscosity of the electron gas can become the dominant process governing transport. In this regime, momentum is a long-lived quantity whose evolution is described by an emergent hydrodynamical theory. Furthermore, breaking time-reversal symmetry leads to the appearance of an odd component to the viscosity called the Hall viscosity, which has attracted considerable attention recently due to its quantized nature in gapped systems but still eludes experimental confirmation. Based on microscopic calculations, we discuss how to measure the effects of both the even and odd components of the viscosity using hydrodynamic electronic transport in mesoscopic samples under applied magnetic fields.

  3. Hydrodynamic Electron Flow and Hall Viscosity.

    PubMed

    Scaffidi, Thomas; Nandi, Nabhanila; Schmidt, Burkhard; Mackenzie, Andrew P; Moore, Joel E

    2017-06-02

    In metallic samples of small enough size and sufficiently strong momentum-conserving scattering, the viscosity of the electron gas can become the dominant process governing transport. In this regime, momentum is a long-lived quantity whose evolution is described by an emergent hydrodynamical theory. Furthermore, breaking time-reversal symmetry leads to the appearance of an odd component to the viscosity called the Hall viscosity, which has attracted considerable attention recently due to its quantized nature in gapped systems but still eludes experimental confirmation. Based on microscopic calculations, we discuss how to measure the effects of both the even and odd components of the viscosity using hydrodynamic electronic transport in mesoscopic samples under applied magnetic fields.

  4. Statistical grand rounds: a review of analysis and sample size calculation considerations for Wilcoxon tests.

    PubMed

    Divine, George; Norton, H James; Hunt, Ronald; Dienemann, Jacqueline

    2013-09-01

    When a study uses an ordinal outcome measure with unknown differences in the anchors and a small range such as 4 or 7, use of the Wilcoxon rank sum test or the Wilcoxon signed rank test may be most appropriate. However, because nonparametric methods are at best indirect functions of standard measures of location such as means or medians, the choice of the most appropriate summary measure can be difficult. The issues underlying use of these tests are discussed. The Wilcoxon-Mann-Whitney odds directly reflects the quantity that the rank sum procedure actually tests, and thus it can be a superior summary measure. Unlike the means and medians, its value will have a one-to-one correspondence with the Wilcoxon rank sum test result. The companion article appearing in this issue of Anesthesia & Analgesia ("Aromatherapy as Treatment for Postoperative Nausea: A Randomized Trial") illustrates these issues and provides an example of a situation for which the medians imply no difference between 2 groups, even though the groups are, in fact, quite different. The trial cited also provides an example of a single sample that has a median of zero, yet there is a substantial shift for much of the nonzero data, and the Wilcoxon signed rank test is quite significant. These examples highlight the potential discordance between medians and Wilcoxon test results. Along with the issues surrounding the choice of a summary measure, there are considerations for the computation of sample size and power, confidence intervals, and multiple comparison adjustment. In addition, despite the increased robustness of the Wilcoxon procedures relative to parametric tests, some circumstances in which the Wilcoxon tests may perform poorly are noted, along with alternative versions of the procedures that correct for such limitations. 
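
    The Wilcoxon-Mann-Whitney odds can be computed directly from the data: with p = P(X > Y) + 0.5·P(X = Y) over all cross-group pairs, the WMW odds are p / (1 − p). The toy ordinal scores below mirror the article's point: both groups have a median of 0, yet the WMW odds show a clear shift.

    ```python
    import numpy as np

    def wmw_odds(x, y):
        """Wilcoxon-Mann-Whitney odds: p / (1 - p), where
        p = P(X > Y) + 0.5 * P(X = Y) over all cross-group pairs --
        the quantity the rank sum procedure actually addresses."""
        x, y = np.asarray(x), np.asarray(y)
        gt = (x[:, None] > y[None, :]).mean()
        eq = (x[:, None] == y[None, :]).mean()
        p = gt + 0.5 * eq
        return p / (1 - p)

    # Toy 0-3 ordinal scores: both medians are 0, yet group x is
    # clearly shifted upward; the WMW odds capture this, medians do not.
    x = [0, 0, 0, 0, 0, 2, 3, 3]
    y = [0, 0, 0, 0, 0, 0, 1, 1]
    print(f"WMW odds = {wmw_odds(x, y):.2f}")  # ~1.56, despite equal medians
    ```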

  5. Silage Collected from Dairy Farms Harbors an Abundance of Listeriaphages with Considerable Host Range and Genome Size Diversity

    PubMed Central

    Vongkamjan, Kitiya; Switt, Andrea Moreno; den Bakker, Henk C.; Fortes, Esther D.

    2012-01-01

    Since the food-borne pathogen Listeria monocytogenes is common in dairy farm environments, it is likely that phages infecting this bacterium (“listeriaphages”) are abundant on dairy farms. To better understand the ecology and diversity of listeriaphages on dairy farms and to develop a diverse phage collection for further studies, silage samples collected on two dairy farms were screened for L. monocytogenes and listeriaphages. While only 4.5% of silage samples tested positive for L. monocytogenes, 47.8% of samples were positive for listeriaphages, containing up to >1.5 × 104 PFU/g. Host range characterization of the 114 phage isolates obtained, with a reference set of 13 L. monocytogenes strains representing the nine major serotypes and four lineages, revealed considerable host range diversity; phage isolates were classified into nine lysis groups. While one serotype 3c strain was not lysed by any phage isolates, serotype 4 strains were highly susceptible to phages and were lysed by 63.2 to 88.6% of phages tested. Overall, 12.3% of phage isolates showed a narrow host range (lysing 1 to 5 strains), while 28.9% of phages represented broad host range (lysing ≥11 strains). Genome sizes of the phage isolates were estimated to range from approximately 26 to 140 kb. The extensive host range and genomic diversity of phages observed here suggest an important role of phages in the ecology of L. monocytogenes on dairy farms. In addition, the phage collection developed here has the potential to facilitate further development of phage-based biocontrol strategies (e.g., in silage) and other phage-based tools. PMID:23042180

  6. Can mindfulness-based interventions influence cognitive functioning in older adults? A review and considerations for future research.

    PubMed

    Berk, Lotte; van Boxtel, Martin; van Os, Jim

    2017-11-01

    An increased need exists to examine factors that protect against age-related cognitive decline. There is preliminary evidence that meditation can improve cognitive function. However, most studies are cross-sectional and examine a wide variety of meditation techniques. This review focuses on the standard eight-week mindfulness-based interventions (MBIs) such as mindfulness-based stress reduction (MBSR) and mindfulness-based cognitive therapy (MBCT). We searched the PsychINFO, CINAHL, Web of Science, COCHRANE, and PubMed databases to identify original studies investigating the effects of MBIs on cognition in older adults. Six reports were included in the review, of which three were randomized controlled trials. Studies reported preliminary positive effects on memory, executive function and processing speed. However, most reports had a high risk of bias and sample sizes were small. The only study with low risk of bias, a large sample size and an active control group reported no significant findings. We conclude that eight-week MBIs for older adults are feasible, but results on cognitive improvement are inconclusive due to a limited number of studies, small sample sizes, and a high risk of bias. Rather than a narrow focus on cognitive training per se, future research may productively shift to investigate MBIs as a tool to alleviate suffering in older adults, and to prevent cognitive problems in later life already in younger target populations.

  7. Power analysis to detect treatment effects in longitudinal clinical trials for Alzheimer's disease.

    PubMed

    Huang, Zhiyue; Muniz-Terrera, Graciela; Tom, Brian D M

    2017-09-01

    Assessing cognitive and functional changes at the early stage of Alzheimer's disease (AD) and detecting treatment effects in clinical trials for early AD are challenging. Under the assumption that transformed versions of the Mini-Mental State Examination, Clinical Dementia Rating Scale-Sum of Boxes, and Alzheimer's Disease Assessment Scale-Cognitive Subscale scores are from a multivariate linear mixed-effects model, we calculated the sample sizes required to detect treatment effects on the annual rates of change in these three components in clinical trials for participants with mild cognitive impairment. Our results suggest that a large number of participants would be required to detect a clinically meaningful treatment effect in a population with preclinical or prodromal Alzheimer's disease. We found that the transformed Mini-Mental State Examination is more sensitive for detecting treatment effects in early AD than the transformed Clinical Dementia Rating Scale-Sum of Boxes and Alzheimer's Disease Assessment Scale-Cognitive Subscale. The use of optimal weights to construct powerful test statistics or sensitive composite scores/endpoints can reduce the sample sizes needed for clinical trials. Consideration of the multivariate/joint distribution of components' scores rather than the distribution of a single composite score when designing clinical trials can lead to an increase in power and reduced sample sizes for detecting treatment effects in clinical trials for early AD.
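
    A back-of-envelope version of such a calculation, for a single component rather than the multivariate mixed-effects model used in the paper: per-arm sample size to detect a difference in annual rates of change, with made-up effect and slope-SD values.

    ```python
    import math
    from scipy import stats

    def n_per_arm_slope(delta, sigma_slope, alpha=0.05, power=0.80):
        """Per-arm sample size to detect a difference `delta` in annual
        rate of change, given the between-subject SD of estimated slopes.
        Two-sample normal approximation; a simplification of the
        multivariate calculation in the abstract."""
        z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
        return math.ceil(2 * (z * sigma_slope / delta) ** 2)

    # Illustrative: detect a 25% slowing of a 1.0-point/year decline
    # (delta = 0.25) when the slope SD is 1.2 points/year.
    print(n_per_arm_slope(delta=0.25, sigma_slope=1.2))  # 362 per arm
    ```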

  8. The efficacy of respondent-driven sampling for the health assessment of minority populations.

    PubMed

    Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao

    2017-10-01

    Respondent-driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. Like snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent of the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey to utilize this methodology in Guam. It was conducted in hopes of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data were collected in Guam in 2013 (n=511) and population estimates were compared with 2012 BRFSS data (n=2031) and the 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods, and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method, which can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants. Copyright © 2017. Published by Elsevier Ltd.

  9. Macrostructure from Microstructure: Generating Whole Systems from Ego Networks

    PubMed Central

    Smith, Jeffrey A.

    2014-01-01

    This paper presents a new simulation method to make global network inference from sampled data. The proposed simulation method takes sampled ego network data and uses Exponential Random Graph Models (ERGM) to reconstruct the features of the true, unknown network. After describing the method, the paper presents two validity checks of the approach: the first uses the 20 largest Add Health networks, while the second uses the Sociology Coauthorship network in the 1990s. For each test, I take random ego network samples from the known networks and use my method to make global network inference. I find that my method successfully reproduces the properties of the networks, such as distance and main component size. The results also suggest that simpler, baseline models provide considerably worse estimates for most network properties. I end the paper by discussing the bounds/limitations of ego network sampling. I also discuss possible extensions to the proposed approach. PMID:25339783

  10. Comparative analysis of laparoscopic and ultrasound-guided biopsy methods for gene expression analysis in transgenic goats.

    PubMed

    Melo, C H; Sousa, F C; Batista, R I P T; Sanchez, D J D; Souza-Fabjan, J M G; Freitas, V J F; Melo, L M; Teixeira, D I A

    2015-07-31

    The present study aimed to compare laparoscopic (LP) and ultrasound-guided (US) biopsy methods to obtain either liver or splenic tissue samples for ectopic gene expression analysis in transgenic goats. Tissue samples were collected from human granulocyte colony stimulating factor (hG-CSF)-transgenic bucks and submitted to real-time PCR for the endogenous genes (Sp1, Baff, and Gapdh) and the transgene (hG-CSF). Both LP and US biopsy methods were successful in obtaining liver and splenic samples that could be analyzed by PCR (i.e., sufficient sample sizes and RNA yield were obtained). Although the number of attempts made to obtain the tissue samples was similar (P > 0.05), LP procedures took considerably longer than the US method (P = 0.03). Finally, transgene transcripts were not detected in spleen or liver samples. Thus, for the phenotypic characterization of a transgenic goat line, investigation of ectopic gene expression can be made successfully by LP or US biopsy, avoiding the traditional approach of euthanasia.

  11. Reducing uncertainty in dust monitoring to detect aeolian sediment transport responses to land cover change

    NASA Astrophysics Data System (ADS)

    Webb, N.; Chappell, A.; Van Zee, J.; Toledo, D.; Duniway, M.; Billings, B.; Tedela, N.

    2017-12-01

    Anthropogenic land use and land cover change (LULCC) influence global rates of wind erosion and dust emission, yet our understanding of the magnitude of the responses remains poor. Field measurements and monitoring provide essential data to resolve aeolian sediment transport patterns and assess the impacts of human land use and management intensity. Data collected in the field are also required for dust model calibration and testing, as models have become the primary tool for assessing LULCC-dust cycle interactions. However, there is considerable uncertainty in estimates of dust emission due to the spatial variability of sediment transport. Field sampling designs are currently rudimentary and considerable opportunities are available to reduce the uncertainty. Establishing the minimum detectable change is critical for measuring spatial and temporal patterns of sediment transport, detecting potential impacts of LULCC and land management, and for quantifying the uncertainty of dust model estimates. Here, we evaluate the effectiveness of common sampling designs (e.g., simple random sampling, systematic sampling) used to measure and monitor aeolian sediment transport rates. Using data from the US National Wind Erosion Research Network across diverse rangeland and cropland cover types, we demonstrate how only large changes in sediment mass flux (of the order 200% to 800%) can be detected when small sample sizes are used, crude sampling designs are implemented, or when the spatial variation is large. We then show how statistical rigour and the straightforward application of a sampling design can reduce the uncertainty and detect change in sediment transport over time and between land use and land cover types.
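
    The minimum-detectable-change logic can be sketched with a two-sample normal approximation: with n samples per condition and a coefficient of variation CV, the smallest detectable relative change in mean mass flux is roughly (z₁₋α/₂ + z₁₋β)·CV·√(2/n). The CV and sample sizes below are illustrative; with few samples and the high spatial variability typical of aeolian transport, only very large changes are detectable, consistent with the 200% to 800% figure above.

    ```python
    import math
    from scipy import stats

    def min_detectable_change(n, cv, alpha=0.05, power=0.80):
        """Smallest relative change in mean sediment mass flux detectable
        with n samples per period/site, given a coefficient of variation
        cv. Two-sample normal approximation; a rough planning sketch."""
        z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
        return z * cv * math.sqrt(2.0 / n)

    for n in (3, 10, 30):
        print(n, f"{min_detectable_change(n, cv=1.5):.0%}")
    # With CV = 150%, n = 3 detects only ~343% changes; n = 30 still
    # only ~109%.
    ```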

  12. Aqueous phase hydrogenation of phenol catalyzed by Pd and PdAg on ZrO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resende, Karen A.; Hori, Carla E.; Noronha, Fabio B.

    Hydrogenation of phenol in aqueous phase was studied over a series of ZrO2-supported Pd catalysts in order to explore the effects of particle size and of Ag addition on the activity of Pd. Kinetic assessments were performed in a batch reactor on monometallic Pd/ZrO2 samples with different Pd loadings (0.5%, 1% and 2%), as well as on a 1% PdAg/ZrO2 sample. The turnover frequency (TOF) increases with the Pd particle size. The reaction orders in phenol and H2 indicate that the surface coverages by phenol, H2 and their derived intermediates are higher on 0.5% Pd/ZrO2 than on the other samples. The activation energy was the lowest on the least active sample (0.5% Pd/ZrO2), while being identical on the 1% and 2% Pd/ZrO2 catalysts. Thus, the significantly lower activity of the small Pd particles (1-2 nm on average) in 0.5% Pd/ZrO2 is explained by the unfavorable activation entropies for the strongly bound species. The presence of Ag considerably increases the TOF of the reaction by decreasing the Ea and increasing the coverages of phenol and H2.

  13. Effect of wire size on maxillary arch force/couple systems for a simulated high canine malocclusion.

    PubMed

    Major, Paul W; Toogood, Roger W; Badawi, Hisham M; Carey, Jason P; Seru, Surbhi

    2014-12-01

    To better understand the effects of copper nickel titanium (CuNiTi) archwire size on bracket-archwire mechanics through the analysis of force/couple distributions along the maxillary arch. The hypothesis was that wire size is linearly related to the forces and moments produced along the arch. An Orthodontic Simulator was utilized to study a simplified high canine malocclusion. Force/couple distributions produced by passive and elastic ligation were measured for two wire sizes (Damon 0.014 and 0.018 inch) with a sample size of 144. The distribution and variation in force/couple loading around the arch is a complicated function of wire size. The use of a thicker wire increases the force/couple magnitudes regardless of ligation method. Owing to the non-linear material behaviour of CuNiTi, this increase is less than would occur based on linear theory, as would apply for stainless steel wires. The results demonstrate that an increase in wire size does not result in a proportional increase of applied force/moment. This discrepancy is explained in terms of the non-linear properties of CuNiTi wires. This non-proportional force response in relation to increased wire size warrants careful consideration when selecting wires in a clinical setting. © 2014 British Orthodontic Society.

  14. Study into the correlation of dominant pore throat size and SIP relaxation frequency

    NASA Astrophysics Data System (ADS)

    Kruschwitz, Sabine; Prinz, Carsten; Zimathies, Annett

    2016-12-01

    There is currently a debate within the SIP community about the characteristic textural length scale controlling the relaxation time of consolidated porous media. One idea is that the relaxation time is dominated by the pore throat size distribution, or more specifically the modal pore throat size, as determined in mercury intrusion capillary pressure tests. Recently, new studies on inverting pore size distributions from SIP data were published, implying that the relaxation mechanisms and controlling length scale are well understood. In contrast, new analytical model studies based on the Marshall-Madden membrane polarization theory suggested that two relaxation processes might compete: one along the short, narrow pore (the throat) and one across the wider pore, in case the narrow pores become relatively long. This paper presents a first systematically focused study into the relationship of pore throat sizes and SIP relaxation times. The generality of predicted trends is investigated across a wide range of materials differing considerably in chemical composition, specific surface and pore space characteristics. Three different groups of relaxation behaviors can be clearly distinguished. The different behaviors are related to clay content and type, carbonate content, and the size of the grains and the wide pores in the samples.

  15. Optical and size characterization of dissolved organic matter from the lower Yukon River

    NASA Astrophysics Data System (ADS)

    Guo, L.; Lin, H.

    2017-12-01

    The Arctic rivers have experienced significant climate and environmental changes over the last several decades, and the export fluxes and environmental fate of their dissolved organic matter (DOM) have received considerable attention. Monthly or bimonthly water samples were collected from the Yukon River, one of the Arctic rivers, between July 2004 and September 2005 for size fractionation to isolate low-molecular-weight (LMW, <1 kDa) and high-molecular-weight (HMW, >1 kDa) DOM. The freeze-dried HMW-DOM was then characterized for its optical properties using fluorescence spectroscopy and for its colloidal size spectra using asymmetrical flow field-flow fractionation techniques. Ratios of biological index (BIX) to humification index (HIX) show a seasonal change, with lower values in river open seasons and higher values under the ice, reflecting the influence of river discharge. Three major fluorescent DOM components were identified, including two humic-like components (Ex/Em at 260/480 nm and 250/420 nm, respectively) and one protein-like component (Ex/Em = 250/330 nm). The ratio of protein-like to humic-like components was broadly correlated with discharge, with low values during spring freshet and high values under the ice. The relatively high protein-like/humic-like ratio during the ice-covered season suggested sources from macro-organisms and/or ice-algae. Both protein-like and humic-like colloidal fluorophores were partitioned mostly in the 1-5 kDa size fraction, although the protein-like fluorophores in some samples also extended to larger colloidal sizes. The relationship between chemical/biological reactivity and size/optical characteristics of DOM needs to be further investigated.

  16. Effect of microstructure on the thermoelectric performance of La{sub 1−x}Sr{sub x}CoO{sub 3}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viskadourakis, Z.; Department of Mechanical and Manufacturing Engineering, University of Cyprus, 75 Kallipoleos Avenue, P.O. Box 20537, 1678 Nicosia; Athanasopoulos, G.I.

    We present a case where the microstructure has a profound effect on the thermoelectric properties of oxide compounds. Specifically, we have investigated the effect of different sintering treatments on La{sub 1−x}Sr{sub x}CoO{sub 3} samples synthesized using the Pechini method. We found that the samples which are dense and consist of inhomogeneously-mixed grains of different size exhibit both a higher Seebeck coefficient and a higher thermoelectric figure of merit than the samples which are porous and consist of grains of almost identical size. The enhancement of the Seebeck coefficient in the dense samples is attributed to the so-called “energy-filtering” mechanism, which is related to the energy barrier of the grain boundary. On the other hand, the thermal conductivity of the porous compounds is significantly reduced in comparison to the dense compounds. It is suggested that a fine manipulation of the grain size ratio combined with a fine-tuning of porosity could considerably enhance the thermoelectric performance of oxides. - Graphical abstract: The enhancement of the dimensionless thermoelectric figure of merit ZT is presented for two equally Sr-doped LaCoO3 compounds possessing different microstructures, indicating the effect of the latter on the thermoelectric performance of the La{sub 1−x}Sr{sub x}CoO{sub 3} solid solution. - Highlights: • Electrical and thermal transport properties are affected by the microstructure in La{sub 1−x}Sr{sub x}CoO{sub 3} polycrystalline materials. • Coarse/fine grain size distribution enhances the Seebeck coefficient. • Porosity reduces the thermal conductivity in La{sub 1−x}Sr{sub x}CoO{sub 3} polycrystalline samples. • The combination of a large/small grain ratio distribution with high porosity may result in the enhancement of the thermoelectric performance of the material.

  17. A qualitative study of parents' perceptions and use of portion size strategies for preschool children's snacks.

    PubMed

    Blake, Christine E; Fisher, Jennifer Orlet; Ganter, Claudia; Younginer, Nicholas; Orloski, Alexandria; Blaine, Rachel E; Bruton, Yasmeen; Davison, Kirsten K

    2015-05-01

    Increases in childhood obesity correspond with shifts in children's snacking behaviors and food portion sizes. This study examined parents' conceptualizations of portion size and the strategies they use to portion snacks in the context of preschool-aged children's snacking. Semi-structured qualitative interviews were conducted with non-Hispanic white (W), African American (AA), and Hispanic (H) low-income parents (n = 60) of preschool-aged children living in Philadelphia and Boston. The interview examined parents' child snacking definitions, purposes, contexts, and frequency. Verbatim transcripts were analyzed using a grounded theory approach. Coding matrices compared responses by race/ethnicity, parent education, and household food security status. Parents commonly referenced portion sizes when describing children's snacks, with phrases like "something small." Snack portion sizes were guided by considerations including healthfulness, location, hunger, and timing. Six strategies for portioning snacks were described: use of small containers; subdividing large portions; buying prepackaged snacks; use of hand measurement; use of measuring cups or scales; and letting children determine portion size. Differences in considerations and strategies were seen between race/ethnic groups and by household food security status. Low-income parents of preschool-aged children described a diverse set of considerations and strategies related to portion sizes of snack foods offered to their children. Future studies should examine how these considerations and strategies influence child dietary quality. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. A qualitative study of parents’ perceptions and use of portion size strategies for preschool children’s snacks

    PubMed Central

    Blake, Christine E.; Fisher, Jennifer Orlet; Ganter, Claudia; Younginer, Nicholas; Orloski, Alexandria; Blaine, Rachel E.; Bruton, Yasmeen; Davison, Kirsten K.

    2014-01-01

    Objective Increases in childhood obesity correspond with shifts in children’s snacking behaviors and food portion sizes. This study examined parents’ conceptualizations of portion size and the strategies they use to portion snacks in the context of preschool-aged children’s snacking. Methods Semi-structured qualitative interviews were conducted with non-Hispanic white (W), African American (AA), and Hispanic (H) low-income parents (n=60) of preschool-aged children living in Philadelphia and Boston. The interview examined parents’ child snacking definitions, purposes, contexts, and frequency. Verbatim transcripts were analyzed using a grounded theory approach. Coding matrices compared responses by race/ethnicity, parent education, and household food security status. Results Parents commonly referenced portion sizes when describing children’s snacks, with phrases like “something small.” Snack portion sizes were guided by considerations including healthfulness, location, hunger, and timing. Six strategies for portioning snacks were described: use of small containers; subdividing large portions; buying prepackaged snacks; use of hand measurement; use of measuring cups or scales; and letting children determine portion size. Differences in considerations and strategies were seen between race/ethnic groups and by household food security status. Conclusions Low-income parents of preschool-aged children described a diverse set of considerations and strategies related to portion sizes of snack foods offered to their children. Future studies should examine how these considerations and strategies influence child dietary quality. PMID:25447008

  19. Impairments of colour vision induced by organic solvents: a meta-analysis study.

    PubMed

    Paramei, Galina V; Meyer-Baron, Monika; Seeber, Andreas

    2004-09-01

    The impairment of colour discrimination induced by occupational exposure to toluene, styrene and mixtures of organic solvents is reviewed and analysed using a meta-analytical approach. Thirty-nine studies were surveyed, covering a wide range of exposure conditions. Those studies using the Lanthony Panel D-15 desaturated test (D-15d) were considered further. From these, 15 samples provided the data on colour discrimination ability (Colour Confusion Index, CCI) and exposure levels required for the meta-analysis. In accordance with previously reported higher CCI values for the exposed groups, the computations yielded positive effect sizes for 13 of the 15 samples, indicating that in the great majority of the studies the exposed groups showed inferior colour discrimination. However, the meta-analysis showed great variation in effect sizes across the studies. Possible reasons for the inconsistency among the reported findings are discussed. These pertain to exposure-related parameters, as well as to confounders such as conditions of test administration and characteristics of subject samples. These factors vary considerably among the studies and might have greatly contributed to the divergence in measured colour vision capacity, thereby obscuring consistent effects of organic solvents on colour discrimination.
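
    As a companion to the pooling step described above, here is a minimal, illustrative sketch of fixed-effect (inverse-variance) pooling of per-sample effect sizes, with Cochran's Q as a crude check on the between-study variation the authors report. The effect sizes and variances are invented placeholders, not the 15 samples analysed in the paper.

    ```python
    # Hedged sketch: inverse-variance pooling of effect sizes (illustrative data).
    import numpy as np

    d = np.array([0.42, 0.15, 0.60, -0.05, 0.33])   # per-sample effect sizes
    v = np.array([0.04, 0.09, 0.05, 0.12, 0.07])    # their sampling variances

    w = 1.0 / v                                      # inverse-variance weights
    d_pooled = np.sum(w * d) / np.sum(w)             # pooled effect size
    se_pooled = np.sqrt(1.0 / np.sum(w))             # standard error of the pool
    ci = (d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled)

    # Cochran's Q as a simple heterogeneity check ("great variation in effect sizes")
    Q = np.sum(w * (d - d_pooled) ** 2)

    print(f"pooled d = {d_pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), Q = {Q:.2f}")
    ```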

  20. Genetics of wellbeing and its components satisfaction with life, happiness, and quality of life: a review and meta-analysis of heritability studies.

    PubMed

    Bartels, Meike

    2015-03-01

    Wellbeing is a major topic of research across several disciplines, reflecting the increasing recognition of its strong value across major domains in life. Previous twin-family studies have revealed that individual differences in wellbeing are accounted for by both genetic as well as environmental factors. A systematic literature search identified 30 twin-family studies on wellbeing or a related measure such as satisfaction with life or happiness. Review of these studies showed considerable variation in heritability estimates (ranging from 0 to 64 %), which makes it difficult to draw firm conclusions regarding the genetic influences on wellbeing. For overall wellbeing twelve heritability estimates, from 10 independent studies, were meta-analyzed by computing a sample size weighted average heritability. Ten heritability estimates, derived from 9 independent samples, were used for the meta-analysis of satisfaction with life. The weighted average heritability of wellbeing, based on a sample size of 55,974 individuals, was 36 % (34-38), while the weighted average heritability for satisfaction with life was 32 % (29-35) (n = 47,750). With this result a more robust estimate of the relative influence of genetic effects on wellbeing is provided.
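
    The pooling rule described here is a sample-size weighted average. A minimal sketch, assuming invented heritability estimates and sample sizes rather than the studies actually meta-analyzed:

    ```python
    # Hedged sketch of a sample-size weighted average heritability (toy numbers).
    import numpy as np

    h2 = np.array([0.36, 0.40, 0.22, 0.38, 0.31])     # heritability estimates
    n  = np.array([12000, 8000, 3000, 20000, 12974])  # per-study sample sizes

    h2_weighted = np.sum(n * h2) / np.sum(n)          # weight each estimate by its n
    print(f"weighted average heritability: {h2_weighted:.2f} (total n = {n.sum()})")
    ```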

  1. Forecasting eruption size: what we know, what we don't know

    NASA Astrophysics Data System (ADS)

    Papale, Paolo

    2017-04-01

    Any eruption forecast includes an evaluation of the expected size of the forthcoming eruption, usually expressed as the probability associated with given size classes. Such evaluation is mostly based on the previous volcanic history at the specific volcano, or it refers to a broader class of volcanoes constituting "analogues" of the one under specific consideration. In any case, use of knowledge from past eruptions implies considering the completeness of the reference catalogue and, most importantly, the existence of systematic biases in the catalogue, which may affect probability estimates and translate into biased volcanic hazard forecasts. An analysis of existing catalogues, with major reference to the catalogue from the Smithsonian Global Volcanism Program, suggests that systematic biases largely dominate at global, regional and local scales: volcanic histories reconstructed at individual volcanoes, often used as a reference for volcanic hazard forecasts, are the result of systematic loss of information with time and poor sample representativeness. That situation strictly requires the use of techniques to complete existing catalogues, as well as careful consideration of the uncertainties deriving from inadequate knowledge and model-dependent data elaboration. A reconstructed global eruption size distribution, obtained by merging information from different existing catalogues, shows a mode in the VEI 1-2 range, a <0.1% incidence of eruptions with VEI 7 or larger, and substantial uncertainties associated with individual VEI frequencies. Even larger uncertainties are expected to derive from application to individual volcanoes or classes of analogue volcanoes, suggesting large to very large uncertainties associated with volcanic hazard forecasts at virtually any individual volcano worldwide.

  2. An audit strategy for time-to-event outcomes measured with error: application to five randomized controlled trials in oncology.

    PubMed

    Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari

    2013-10-01

    Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.
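
    The audit idea lends itself to a toy illustration. The sketch below applies a generic auxiliary-variable (difference) estimator to a simulated per-subject statistic: BICR is run on a ~28% random subset only, and the full-sample local estimate is corrected by the audited local-versus-central discrepancy. This is only a schematic stand-in for the paper's actual log-hazard-ratio estimator; all numbers are simulated.

    ```python
    # Hedged sketch of a subset audit with a difference estimator (simulated data).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 722                                    # trial size from the worked example
    local = rng.normal(0.0, 1.0, n)            # local evaluations (all subjects)
    bicr = local + rng.normal(0.05, 0.3, n)    # central review, slightly shifted

    audit_frac = 0.28                          # ~28% audited, as in the example
    idx = rng.choice(n, size=int(audit_frac * n), replace=False)

    # Difference estimator: full-sample local mean, corrected by the audited
    # local-vs-central discrepancy.
    est = local.mean() + (bicr[idx].mean() - local[idx].mean())
    print(f"audited estimate {est:.3f} vs full-BICR benchmark {bicr.mean():.3f}")
    ```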

  3. Distribution and diversity of cytotypes in Dianthus broteri as evidenced by genome size variations

    PubMed Central

    Balao, Francisco; Casimiro-Soriguer, Ramón; Talavera, María; Herrera, Javier; Talavera, Salvador

    2009-01-01

    Background and Aims Studying the spatial distribution of cytotypes and genome size in plants can provide valuable information about the evolution of polyploid complexes. Here, the spatial distribution of cytological races and the amount of DNA in Dianthus broteri, an Iberian carnation with several ploidy levels, is investigated. Methods Sample chromosome counts and flow cytometry (using propidium iodide) were used to determine overall genome size (2C value) and ploidy level in 244 individuals of 25 populations. Both fresh and dried samples were investigated. Differences in 2C and 1Cx values among ploidy levels within biogeographical provinces were tested using ANOVA. Geographical correlations of genome size were also explored. Key Results Extensive variation in chromosome numbers (2n = 2x = 30, 2n = 4x = 60, 2n = 6x = 90 and 2n = 12x = 180) was detected, and the dodecaploid cytotype is reported for the first time in this genus. As regards cytotype distribution, six populations were diploid, 11 were tetraploid, three were hexaploid and five were dodecaploid. Except for one diploid population containing some triploid plants (2n = 45), the remaining populations showed a single cytotype. Diploids appeared in two disjunct areas (south-east and south-west), and so did tetraploids (although with a considerably wider geographic range). Dehydrated leaf samples provided reliable measurements of DNA content. Genome size varied significantly among some cytotypes, and also extensively within diploid (up to 1·17-fold) and tetraploid (1·22-fold) populations. Nevertheless, variations were not straightforwardly congruent with ecology and geographical distribution. Conclusions Dianthus broteri shows the highest diversity of cytotypes known to date in the genus Dianthus. Moreover, some cytotypes present remarkable internal genome size variation. The evolution of the complex is discussed in terms of autopolyploidy, with primary and secondary contact zones. PMID:19633312
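
    The flow cytometry step rests on a simple proportionality: the sample's 2C DNA amount equals a co-processed standard's known 2C value scaled by the ratio of mean fluorescence peaks. A minimal sketch, with hypothetical peak positions and reference value:

    ```python
    # Hedged sketch of the standard 2C genome-size calculation from flow cytometry.
    def genome_size_2c(sample_peak, reference_peak, reference_2c_pg):
        """2C DNA amount (pg) from relative propidium-iodide fluorescence peaks."""
        return reference_2c_pg * (sample_peak / reference_peak)

    # e.g., against a hypothetical internal standard with a known 2C value of 4.38 pg
    print(f"{genome_size_2c(183.0, 246.0, 4.38):.2f} pg")
    ```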

  4. Post hoc analyses: after the facts.

    PubMed

    Srinivas, Titte R; Ho, Bing; Kang, Joseph; Kaplan, Bruce

    2015-01-01

    Prospective clinical trials are constructed with high levels of internal validity. Sample size and power considerations usually address primary endpoints. Primary endpoints have traditionally included events that are becoming increasingly less common and thus have led to growing use of composite endpoints and noninferiority trial designs in transplantation. This approach may mask real clinical benefit in one or the other domain with regard to either clinically relevant secondary endpoints or other unexpected findings. In addition, endpoints solely chosen based on power considerations are prone to misjudgment of actual treatment effect size as well as consistency of that effect. In the instances where treatment effects may have been underestimated, valuable information may be lost if buried within a composite endpoint. In all these cases, analyses and post hoc analyses of data become relevant in informing practitioners about clinical benefits or safety signals that may not be captured by the primary endpoint. On the other hand, there are many pitfalls in using post hoc determined endpoints. This short review is meant to allow readers to appreciate post hoc analysis not as an entity with a single approach, but rather as an analysis with unique limitations and strengths that often raise new questions to be addressed in further inquiries.

  5. The particle size distribution, density, and specific surface area of welding fumes from SMAW and GMAW mild and stainless steel consumables.

    PubMed

    Hewett, P

    1995-02-01

    Particle size distributions were measured for fumes from mild steel (MS) and stainless steel (SS), shielded metal arc welding (SMAW) and gas metal arc welding (GMAW) consumables. Up to six samples of each type of fume were collected in a test chamber using a micro-orifice uniform deposit (cascade) impactor. Bulk samples were collected for bulk fume density and specific surface area analysis. Additional impactor samples were collected using polycarbonate substrates and analyzed for elemental content. The parameters of the underlying mass distributions were estimated using a nonlinear least squares analysis method that fits a smooth curve to the mass fraction distribution histograms of all samples for each type of fume. The mass distributions for all four consumables were unimodal and well described by a lognormal distribution; with the exception of the GMAW-MS and GMAW-SS comparison, they were statistically different. The estimated mass distribution geometric means for the SMAW-MS and SMAW-SS consumables were 0.59 and 0.46 micron aerodynamic equivalent diameter (AED), respectively, and 0.25 micron AED for both the GMAW-MS and GMAW-SS consumables. The bulk fume densities and specific surface areas were similar for the SMAW-MS and SMAW-SS consumables and for the GMAW-MS and GMAW-SS consumables, but differed between SMAW and GMAW. The distribution of metals was similar to the mass distributions. Particle size distributions and physical properties of the fumes were considerably different when categorized by welding method. Within each welding method there was little difference between MS and SS fumes.
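
    The fitting step can be sketched as follows: treat each impactor stage's mass fraction as the lognormal probability mass between its cut diameters, and estimate the geometric mean (GM) and geometric standard deviation (GSD) by nonlinear least squares. The stage cut-points and mass fractions below are illustrative, not the study's data.

    ```python
    # Hedged sketch: least-squares fit of a lognormal mass distribution to
    # per-stage impactor mass fractions (all numbers are stand-ins).
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    cuts = np.array([0.06, 0.10, 0.18, 0.32, 0.56, 1.0, 1.8])  # stage bounds, um AED
    frac_obs = np.array([0.05, 0.12, 0.25, 0.30, 0.18, 0.10])  # mass fraction/stage

    def stage_fractions(cuts, gm, gsd):
        # lognormal CDF mass falling between consecutive cut diameters
        z = (np.log(cuts) - np.log(gm)) / np.log(gsd)
        return np.diff(norm.cdf(z))

    popt, _ = curve_fit(stage_fractions, cuts, frac_obs, p0=(0.3, 2.0))
    print(f"GM = {popt[0]:.2f} um AED, GSD = {popt[1]:.2f}")
    ```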

  6. The Effectiveness of Teamwork Training on Teamwork Behaviors and Team Performance: A Systematic Review and Meta-Analysis of Controlled Interventions

    PubMed Central

    McEwan, Desmond; Ruissen, Geralyn R.; Eys, Mark A.; Zumbo, Bruno D.; Beauchamp, Mark R.

    2017-01-01

    The objective of this study was to conduct a systematic review and meta-analysis of teamwork interventions that were carried out with the purpose of improving teamwork and team performance, using controlled experimental designs. A literature search returned 16,849 unique articles. The meta-analysis was ultimately conducted on 51 articles, comprising 72 (k) unique interventions, 194 effect sizes, and 8439 participants, using a random effects model. Positive and significant medium-sized effects were found for teamwork interventions on both teamwork and team performance. Moderator analyses were also conducted, which generally revealed positive and significant effects with respect to several sample, intervention, and measurement characteristics. Implications for effective teamwork interventions as well as considerations for future research are discussed. PMID:28085922

  7. Survey design research: a tool for answering nursing research questions.

    PubMed

    Siedlecki, Sandra L; Butler, Robert S; Burchill, Christian N

    2015-01-01

    The clinical nurse specialist is in a unique position to identify and study clinical problems in need of answers, but lack of time and resources may discourage nurses from conducting research. However, some research methods available to the clinical nurse specialist are neither time-intensive nor cost-prohibitive. The purpose of this article is to explain the utility of survey methodology for answering a number of nursing research questions. The article covers survey content, reliability and validity issues, sample size considerations, and methods of survey delivery.
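
    For context (the formula is not given in the article itself), a standard sample-size calculation for estimating a proportion from a survey is n = z²p(1−p)/e², often inflated by an anticipated response rate. A minimal sketch:

    ```python
    # Hedged sketch of a textbook survey sample-size calculation.
    import math

    def survey_sample_size(p=0.5, margin=0.05, z=1.96, response_rate=0.6):
        n = math.ceil(z**2 * p * (1 - p) / margin**2)  # completed surveys needed
        return math.ceil(n / response_rate)            # invitations to send

    print(survey_sample_size())  # ~385 completed surveys -> ~642 invitations
    ```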

  8. Considerations for the design, analysis and presentation of in vivo studies.

    PubMed

    Ranstam, J; Cook, J A

    2017-03-01

    To describe, explain and give practical suggestions regarding important principles and key methodological challenges in the study design, statistical analysis, and reporting of results from in vivo studies. Pre-specifying endpoints and analysis, recognizing the common underlying assumption of statistically independent observations, performing sample size calculations, and addressing multiplicity issues are important parts of an in vivo study. A clear reporting of results and informative graphical presentations of data are other important parts. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
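
    One of the calculations the authors say should be pre-specified is a sample size calculation. A minimal sketch using the normal-approximation formula for comparing two means, n per group = 2(z₁₋α/₂ + z₁₋β)²σ²/δ², with illustrative inputs:

    ```python
    # Hedged sketch of a two-sample sample-size calculation (normal approximation).
    import math
    from scipy.stats import norm

    def n_per_group(delta, sigma, alpha=0.05, power=0.80):
        z_a = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
        z_b = norm.ppf(power)           # power requirement
        return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

    print(n_per_group(delta=0.5, sigma=1.0))  # 63 per group at alpha=0.05, power=0.80
    ```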

  9. DLP NIRscan Nano: an ultra-mobile DLP-based near-infrared Bluetooth spectrometer

    NASA Astrophysics Data System (ADS)

    Gelabert, Pedro; Pruett, Eric; Perrella, Gavin; Subramanian, Sreeram; Lakshminarayanan, Aravind

    2016-02-01

    The DLP NIRscan Nano is an ultra-portable spectrometer evaluation module utilizing DLP technology to achieve lower cost, smaller size, and higher performance than traditional architectures. The replacement of a linear array detector with a DLP digital micromirror device (DMD), in conjunction with a single-point detector, adds the functionality of programmable spectral filters and sampling techniques that were not previously available on NIR spectrometers. This paper presents the hardware, software, and optical systems of the DLP NIRscan Nano and its design considerations in the implementation of a DLP-based spectrometer.

  10. Value of information methods to design a clinical trial in a small population to optimise a health economic utility function.

    PubMed

    Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel

    2018-02-08

    Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
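
    A heavily simplified toy version of the decision-theoretic idea (not the authors' exact utility function): choose the per-arm sample size n that maximizes an expected societal net gain, i.e., the prior-weighted benefit to the future population given approval minus a per-patient trial cost. All constants are illustrative assumptions; the qualitative point is that the optimum shrinks with the future population.

    ```python
    # Hedged toy VOI calculation: grid-search the per-arm sample size n.
    import numpy as np
    from scipy.stats import norm

    sigma, alpha = 1.0, 0.025          # outcome SD, one-sided significance level
    mu0, tau0 = 0.2, 0.3               # normal prior on the treatment effect theta

    theta = np.linspace(-1.0, 1.5, 2001)
    dtheta = theta[1] - theta[0]
    prior = norm.pdf(theta, mu0, tau0)
    prior /= prior.sum() * dtheta      # normalize the prior on the grid

    def expected_net_gain(n, n_future, cost_per_patient=0.25):
        se = sigma * np.sqrt(2.0 / n)                               # SE of estimate
        power = 1.0 - norm.cdf(norm.ppf(1 - alpha) - theta / se)    # P(approve|theta)
        benefit = (prior * theta * power).sum() * dtheta * n_future
        return benefit - cost_per_patient * 2 * n                   # 2n trial patients

    for n_future in (1_000, 20_000):   # small vs large future patient population
        ns = np.arange(10, 1000, 5)
        gains = [expected_net_gain(n, n_future) for n in ns]
        print(f"future population {n_future}: optimal n per arm = {ns[np.argmax(gains)]}")
    ```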

  11. Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.

    PubMed

    Kubitzki, Marcus B; de Groot, Bert L

    2007-06-15

    Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T(0). This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
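
    Standard temperature replica exchange, which TEE-REX builds on, swaps configurations between neighbouring replicas with the Metropolis probability min(1, exp[(βᵢ − βⱼ)(Eᵢ − Eⱼ)]). A minimal sketch of that acceptance rule (the k_B value assumes kJ/mol energy units; the example energies and temperatures are invented):

    ```python
    # Hedged sketch of the standard REX swap-acceptance criterion.
    import math
    import random

    def attempt_swap(E_i, E_j, T_i, T_j, k_B=0.0083145):  # k_B in kJ/(mol K)
        beta_i, beta_j = 1.0 / (k_B * T_i), 1.0 / (k_B * T_j)
        delta = (beta_i - beta_j) * (E_i - E_j)
        if delta >= 0:
            return True                       # always accept favourable swaps
        return random.random() < math.exp(delta)

    # A hot replica (400 K) offering a lower-energy configuration to 300 K:
    # delta > 0 here, so this particular swap is always accepted.
    print(attempt_swap(E_i=-500.0, E_j=-520.0, T_i=300.0, T_j=400.0))
    ```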

  12. Tailoring magnetic properties of Co nanocluster assembled films using hydrogen

    NASA Astrophysics Data System (ADS)

    Romero, C. P.; Volodin, A.; Paddubrouskaya, H.; Van Bael, M. J.; Van Haesendonck, C.; Lievens, P.

    2018-07-01

    Tailoring magnetic properties in nanocluster assembled cobalt (Co) thin films was achieved by admitting a small percentage of H2 gas (∼2%) into the Co gas phase cluster formation chamber prior to deposition. The oxygen content in the films is considerably reduced by the presence of hydrogen during the cluster formation, leading to enhanced magnetic interactions between clusters. Two sets of Co samples were fabricated, one without hydrogen gas and one with hydrogen gas. Magnetic properties of the non-hydrogenated and the hydrogen-treated Co nanocluster assembled films are comparatively studied using magnetic force microscopy and vibrating sample magnetometry. When comparing the two sets of samples the considerably larger coercive field of the H2-treated Co nanocluster film and the extended micrometer-sized magnetic domain structure confirm the enhancement of magnetic interactions between clusters. The thickness of the antiferromagnetic CoO layer is controlled with this procedure and modifies the exchange bias effect in these films. The exchange bias shift is lower for the H2-treated Co nanocluster film, which indicates that a thinner antiferromagnetic CoO reduces the coupling with the ferromagnetic Co. The hydrogen-treatment method can be used to tailor the oxidation levels thus controlling the magnetic properties of ferromagnetic cluster-assembled films.

  13. The international growth standard for preadolescent and adolescent children: statistical considerations.

    PubMed

    Cole, T J

    2006-12-01

    This article discusses statistical considerations for the design of a new study intended to provide an International Growth Standard for Preadolescent and Adolescent Children, including issues such as cross-sectional, longitudinal, and mixed designs; sample-size derivation for the number of populations and number of children per population; modeling of growth centiles of height, weight, and other measurements; and modeling of the adolescent growth spurt. The conclusions are that a mixed longitudinal design will provide information on both growth distance and velocity; samples of children from 5 to 10 sites should be suitable for an international standard (based on political rather than statistical arguments); the samples should be broadly uniform across age but oversampled during puberty, and should include data into adulthood. The LMS method is recommended for constructing measurement centiles, and parametric or semiparametric approaches are available to estimate the timing of the adolescent growth spurt in individuals. If the new standard is to be grafted onto the 2006 World Health Organization (WHO) reference, caution is needed at the join point of 5 years, where children from the new standard are likely to be appreciably more obese than those from the WHO reference, due to the rising trends in obesity and the time gap in data collection between the two surveys.
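
    The recommended LMS method summarizes each age group by L (Box-Cox power), M (median) and S (coefficient of variation); a measurement y then converts to a z-score from which any centile follows. A minimal sketch with invented reference values:

    ```python
    # Hedged sketch of the LMS z-score transformation.
    import math

    def lms_z(y, L, M, S):
        """z-score for measurement y given reference L, M, S at that age/sex."""
        if abs(L) > 1e-12:
            return ((y / M) ** L - 1.0) / (L * S)
        return math.log(y / M) / S          # limiting case as L -> 0

    # height of 135 cm against hypothetical reference values L=1.0, M=131.5, S=0.052
    print(f"z = {lms_z(135.0, 1.0, 131.5, 0.052):.2f}")
    ```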

  14. Scientific Misconduct.

    PubMed

    Gross, Charles

    2016-01-01

    Scientific misconduct has been defined as fabrication, falsification, and plagiarism. Scientific misconduct has occurred throughout the history of science. The US government began to take systematic interest in such misconduct in the 1980s. Since then, a number of studies have examined how frequently individual scientists have observed scientific misconduct or were involved in it. Although the studies vary considerably in their methodology and in the nature and size of their samples, in most studies at least 10% of the scientists sampled reported having observed scientific misconduct. In addition to studies of the incidence of scientific misconduct, this review considers the recent increase in paper retractions, the role of social media in scientific ethics, several instructional examples of egregious scientific misconduct, and potential methods to reduce research misconduct.

  15. Interlinking backscatter, grain size and benthic community structure

    NASA Astrophysics Data System (ADS)

    McGonigle, Chris; Collier, Jenny S.

    2014-06-01

    The relationship between acoustic backscatter, sediment grain size and benthic community structure is examined using three different quantitative methods, covering image- and angular response-based approaches. Multibeam time-series backscatter (300 kHz) data acquired in 2008 off the coast of East Anglia (UK) are compared with grain size properties, macrofaunal abundance and biomass from 130 Hamon and 16 Clamshell grab samples. Three predictive methods are used: 1) image-based (mean backscatter intensity); 2) angular response-based (predicted mean grain size), and 3) image-based (1st principal component and classification) from Quester Tangent Corporation Multiview software. Relationships between grain size and backscatter are explored using linear regression. Differences in grain size and benthic community structure between acoustically defined groups are examined using ANOVA and PERMANOVA+. Results for the Hamon grab stations indicate significant correlations between measured mean grain size and mean backscatter intensity, angular response predicted mean grain size, and 1st principal component of QTC analysis (all p < 0.001). Results for the Clamshell grab show stronger positive correlations for two of the methods: mean backscatter intensity (r² = 0.619; p < 0.001) and angular response predicted mean grain size (r² = 0.692; p < 0.001). ANOVA reveals significant differences in mean grain size (Hamon) within acoustic groups for all methods: mean backscatter (p < 0.001), angular response predicted grain size (p < 0.001), and QTC class (p = 0.009). Mean grain size (Clamshell) shows a significant difference between groups for mean backscatter (p = 0.001); other methods were not significant. PERMANOVA for the Hamon abundance shows that benthic community structure was significantly different between acoustic groups for all methods (p ≤ 0.001). Overall, these results show considerable promise in that more than 60% of the variance in the mean grain size of the Clamshell grab samples can be explained by mean backscatter or acoustically-predicted grain size. These results show that there is significant predictive capacity for sediment characteristics from multibeam backscatter and that these acoustic classifications can have ecological validity.
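
    The first analysis step, ordinary least-squares regression of mean grain size on mean backscatter intensity with r² and p reported, can be sketched as follows (the arrays are stand-in values, not the survey data):

    ```python
    # Hedged sketch of the regression step with illustrative numbers.
    import numpy as np
    from scipy.stats import linregress

    backscatter_db = np.array([-32.1, -28.4, -25.0, -22.3, -20.8, -18.9])
    mean_grain_phi = np.array([3.6, 2.9, 2.4, 1.8, 1.6, 1.1])

    fit = linregress(backscatter_db, mean_grain_phi)
    print(f"r^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}, slope = {fit.slope:.3f}")
    ```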

  16. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2013-01-01

    Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories at great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have been recently proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers.

  17. Knowledge level of effect size statistics, confidence intervals and meta-analysis in Spanish academic psychologists.

    PubMed

    Badenes-Ribera, Laura; Frias-Navarro, Dolores; Pascual-Soler, Marcos; Monterde-I-Bort, Héctor

    2016-11-01

    The statistical reform movement and the American Psychological Association (APA) defend the use of estimators of the effect size and its confidence intervals, as well as the interpretation of the clinical significance of the findings. A survey was conducted in which academic psychologists were asked about their behavior in designing and carrying out their studies. The sample was composed of 472 participants (45.8% men). The mean number of years as a university professor was 13.56 (SD = 9.27). The use of effect-size estimators is becoming generalized, as is the consideration of meta-analytic studies. However, several inadequate practices still persist. A traditional model of methodological behavior based on statistical significance tests is maintained, reflected in the predominance of Cohen's d and the unadjusted R²/η², which are not robust to outliers, departures from normality, or violations of statistical assumptions, and in the under-reporting of confidence intervals for effect-size statistics. The paper concludes with recommendations for improving statistical practice.
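
    The practice the authors advocate, reporting an effect size with its confidence interval rather than a bare p value, can be illustrated with Cohen's d and its usual large-sample standard error approximation (all numbers are invented):

    ```python
    # Hedged sketch: Cohen's d with an approximate 95% confidence interval.
    import math

    def cohens_d_ci(m1, s1, n1, m2, s2, n2, z=1.96):
        sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / sp                                  # standardized difference
        se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
        return d, (d - z * se, d + z * se)

    d, ci = cohens_d_ci(10.2, 2.1, 40, 9.1, 2.3, 42)
    print(f"d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    ```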

  18. Determinants of Awareness, Consideration, and Choice Set Size in University Choice.

    ERIC Educational Resources Information Center

    Dawes, Philip L.; Brown, Jennifer

    2002-01-01

    Developed and tested a model of students' university "brand" choice using five individual-level variables (ethnic group, age, gender, number of parents going to university, and academic ability) and one situational variable (duration of search) to explain variation in the sizes of awareness, consideration, and choice decision sets. (EV)

  19. Bayesian methods for the design and interpretation of clinical trials in very rare diseases

    PubMed Central

    Hampson, Lisa V; Whitehead, John; Eleftheriou, Despina; Brogan, Paul

    2014-01-01

    This paper considers the design and interpretation of clinical trials comparing treatments for conditions so rare that worldwide recruitment efforts are likely to yield total sample sizes of 50 or fewer, even when patients are recruited over several years. For such studies, the sample size needed to meet a conventional frequentist power requirement is clearly infeasible. Rather, the expectation of any such trial has to be limited to the generation of an improved understanding of treatment options. We propose a Bayesian approach for the conduct of rare-disease trials comparing an experimental treatment with a control where patient responses are classified as a success or failure. A systematic elicitation from clinicians of their beliefs concerning treatment efficacy is used to establish Bayesian priors for unknown model parameters. The process of determining the prior is described, including the possibility of formally considering results from related trials. As sample sizes are small, it is possible to compute all possible posterior distributions of the two success rates. A number of allocation ratios between the two treatment groups can be considered with a view to maximising the prior probability that the trial concludes recommending the new treatment when in fact it is non-inferior to control. Consideration of the extent to which opinion can be changed, even by data from the best feasible design, can help to determine whether such a trial is worthwhile. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24957522
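
    With Beta priors elicited from clinicians and binary success/failure outcomes, both posteriors are available in closed form, and the quantity of interest is the posterior probability that the new treatment is non-inferior to control. A minimal sketch with invented prior parameters, trial results and margin:

    ```python
    # Hedged sketch of a Beta-Binomial posterior comparison for a tiny trial.
    import numpy as np

    rng = np.random.default_rng(0)

    # Posterior = Beta(prior_a + successes, prior_b + failures); all numbers invented.
    draws_new = rng.beta(3 + 11, 7 + 5, 200_000)   # prior Beta(3,7), 11/16 successes
    draws_ctl = rng.beta(5 + 9,  5 + 9, 200_000)   # prior Beta(5,5), 9/18 successes

    margin = 0.10
    p_noninf = np.mean(draws_new > draws_ctl - margin)
    print(f"P(new treatment non-inferior) = {p_noninf:.3f}")
    ```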

  20. Towards well-defined gold nanomaterials via diafiltration and aptamer mediated synthesis

    NASA Astrophysics Data System (ADS)

    Sweeney, Scott Francis

    Gold nanoparticles have garnered recent attention due to their intriguing size- and shape-dependent properties. Routine access to well-defined gold nanoparticle samples in terms of core diameter, shape, peripheral functionality and purity is required in order to carry out fundamental studies of their properties and to utilize these properties in future applications. For this reason, the development of methods for preparing well-defined gold nanoparticle samples remains an area of active research in materials science. In this dissertation, two methods, diafiltration and aptamer mediated synthesis, are explored as possible routes towards well-defined gold nanoparticle samples. It is shown that diafiltration has considerable potential for the efficient and convenient purification and size separation of water-soluble nanoparticles. The suitability of diafiltration for (i) the purification of water-soluble gold nanoparticles, (ii) the separation of a bimodal distribution of nanoparticles into fractions, (iii) the fractionation of a polydisperse sample and (iv) the isolation of trimers from monomers and aggregates is studied. NMR, thermogravimetric analysis (TGA), and X-ray photoelectron spectroscopy (XPS) measurements demonstrate that diafiltration produces highly pure nanoparticles. UV-visible spectroscopic and transmission electron microscopic analyses show that diafiltration offers the ability to separate nanoparticles of disparate core size, including linked nanoparticles. These results demonstrate the applicability of diafiltration for the rapid and green preparation of high-purity gold nanoparticle samples and the size separation of heterogeneous nanoparticle samples. In the second half of the dissertation, the identification of materials specific aptamers and their use to synthesize shaped gold nanoparticles is explored. The use of in vitro selection for identifying materials specific peptide and oligonucleotide aptamers is reviewed, outlining the specific requirements of in vitro selection for materials and the ways in which the field can be advanced. A promising new technique, in vitro selection on surfaces (ISOS), is developed and the discovery using ISOS of RNA aptamers that bind to evaporated gold is discussed. Analysis of the isolated gold binding RNA aptamers indicates that they are highly structured with single-stranded polyadenosine binding motifs. These aptamers, and similarly isolated peptide aptamers, are briefly explored for their ability to synthesize gold nanoparticles. This dissertation contains both previously published and unpublished co-authored material.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altabet, Y. Elia; Debenedetti, Pablo G., E-mail: pdebene@princeton.edu; Stillinger, Frank H.

    In particle systems with cohesive interactions, the pressure-density relationship of the mechanically stable inherent structures sampled along a liquid isotherm (i.e., the equation of state of an energy landscape) will display a minimum at the Sastry density ρ{sub S}. The tensile limit at ρ{sub S} is due to cavitation that occurs upon energy minimization, and previous characterizations of this behavior suggested that ρ{sub S} is a spinodal-like limit that separates all homogeneous and fractured inherent structures. Here, we revisit the phenomenology of Sastry behavior and find that it is subject to considerable finite-size effects, and the development of the inherent structure equation of state with system size is consistent with the finite-size rounding of an athermal phase transition. What appears to be a continuous spinodal-like point at finite system sizes becomes discontinuous in the thermodynamic limit, indicating behavior akin to a phase transition. We also study cavitation in glassy packings subjected to athermal expansion. Many individual expansion trajectories averaged together produce a smooth equation of state, which we find also exhibits features of finite-size rounding, and the examples studied in this work give rise to a larger limiting tension than for the corresponding landscape equation of state.

  2. Characterization of dust from blast furnace cast house de-dusting.

    PubMed

    Lanzerstorfer, Christof

    2017-10-01

    During casting of liquid iron and slag, a considerable amount of dust is emitted into the cast house of a blast furnace (BF). Usually, this dust is extracted via exhaust hoods and subsequently separated from the ventilation air. In most BFs the cast house dust is recycled. In this study a sample of cast house dust was split by air classification into five size fractions, which were then analysed. Micrographs showed that the dominating particle type in all size fractions is that of single spherical-shaped particles. However, some irregular-shaped particles were also found, and in the finest size fraction some agglomerates were also present. Almost spherical particles consisted of Fe and O, while highly irregular-shaped particles consisted of C. The most abundant element was Fe, followed by Ca and C. These elements were distributed relatively uniformly across the size fractions. As, Cd, Cu, K, Pb, S, Sb and Zn were enriched significantly in the fine size fractions. Thus, air classification would be an effective method for improved recycling. By separating a small fraction of fines (about 10-20%), the mass of Zn in the recycled coarse dust could be reduced by 40-55%.

  3. Spatial Sampling of Weather Data for Regional Crop Yield Simulations

    NASA Technical Reports Server (NTRS)

    Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian; hide

    2016-01-01

    Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models, but all models reproduced the stratification pattern well. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields, but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed, including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations, but differences between crop models must be considered, as the choice of a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management data for regional simulations of crop yields is still needed.
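
    The core design point, that stratified sampling needs fewer points than random sampling for the same accuracy, is easy to demonstrate on a synthetic field with strong stratum structure (a toy stand-in for the 34,078 weather points):

    ```python
    # Hedged toy comparison of stratified vs simple random sampling of a mean.
    import numpy as np

    rng = np.random.default_rng(42)
    strata = [rng.normal(mu, 0.3, 10_000) for mu in (5.0, 6.5, 8.0)]  # 3 strata
    field = np.concatenate(strata)

    def srs(n):                     # simple random sample of the whole field
        return rng.choice(field, n, replace=False).mean()

    def stratified(n):              # equal allocation across equal-sized strata
        per = n // len(strata)
        return np.mean([rng.choice(s, per, replace=False).mean() for s in strata])

    n = 30
    err_srs = np.std([srs(n) for _ in range(500)])
    err_str = np.std([stratified(n) for _ in range(500)])
    print(f"SRS sampling error {err_srs:.3f} vs stratified error {err_str:.3f}")
    ```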

  4. MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes

    NASA Technical Reports Server (NTRS)

    Allen, Carlton C.

    2011-01-01

    The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A) subaqueous or hydrothermal sediments; (1B) hydrothermally altered rocks or low-temperature fluid-altered rocks (equal priority); (2) unaltered igneous rocks; (3) regolith, including airfall dust; and (4) present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere. Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by mass of each sample should be preserved to support future scientific investigations. Samples of 15-16 grams are considered optimal. The total mass of returned rocks, soils, blanks and standards should be approximately 500 grams. Atmospheric gas samples should be the equivalent of 50 cubic cm at 20 times Mars ambient atmospheric pressure.

  5. Semi-automatic surface sediment sampling system - A prototype to be implemented in bivalve fishing surveys

    NASA Astrophysics Data System (ADS)

    Rufino, Marta M.; Baptista, Paulo; Pereira, Fábio; Gaspar, Miguel B.

    2018-01-01

    In the current work we propose a new method to sample surface sediment during bivalve fishing surveys. Fishing institutes all around the world carry out regular surveys with the aim of monitoring the stocks of commercial species. These surveys often comprise more than one hundred sampling stations and cover large geographical areas. Although superficial sediment grain sizes are among the main drivers of benthic communities and provide crucial information for studies on coastal dynamics, overall there is a strong lack of this type of data, possibly because traditional surface sediment sampling methods use grabs, which require considerable time and effort to be deployed on a regular basis or over large areas. In face of these aspects, we developed an easy and inexpensive method to sample superficial sediments during bivalve fisheries monitoring surveys, without increasing survey time or human resources. The method was successfully evaluated and validated during a typical bivalve survey carried out on the Northwest coast of Portugal, confirming that it did not interfere with the survey objectives. Furthermore, the method was validated by collecting samples with a traditional Van Veen grab (traditional method), which showed a grain size composition similar to that of the samples collected by the new method at the same localities. We recommend that the procedure be implemented on regular bivalve fishing surveys, together with an image analysis system to analyse the collected samples. The new method will provide a substantial quantity of data on surface sediment in coastal areas in an inexpensive and efficient manner, with high potential for application in different fields of research.

  6. Technical assessment of processing plants as exemplified by the sorting of beverage cartons from lightweight packaging wastes.

    PubMed

    Feil, A; Thoden van Velzen, E U; Jansen, M; Vitz, P; Go, N; Pretz, T

    2016-02-01

    The recovery of beverage cartons (BC) in three lightweight packaging (LP) waste processing plants was analyzed with different input materials and input masses in the range of 21-50 Mg. The data were generated by gravimetric determination of the sorting products, sampling and sorting analysis. Since the particle size of beverage cartons is larger than 120 mm, a modified sampling plan was implemented, targeting multiple sampling (3-11 individual samplings) and a total sample size of 1200 l (ca. 60 kg) for the BC products and of about 2400 l (ca. 120 kg) for the material-heterogeneous mixed plastics (MP) and sorting residue products. The results indicate that the quantification of the beverage carton yield in the process, i.e., by including all product-containing material streams, can be specified only with considerable fluctuation ranges. Consequently, the total assessment, regarding all product streams, is rather qualitative than quantitative. Irregular operating conditions as well as unfavorable sampling conditions and capacity overloads are likely causes of the high confidence intervals. From the results of the current study, recommendations can be derived for better sampling in LP processing plants. Despite the suboptimal statistical results, the findings indicate very clearly that the plants show definite optimisation potential with regard to the yield of beverage cartons as well as the required product purity. Due to the test character of the sorting trials, the plant parameterization was not ideal for this sorting task, and consequently the results should be interpreted with care. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Characterizing the phytoplankton soup: pump and plumbing effects on the particle assemblage in underway optical seawater systems.

    PubMed

    Cetinić, Ivona; Poulton, Nicole; Slade, Wayne H

    2016-09-05

    Many optical and biogeochemical data sets, crucial for algorithm development and satellite data validation, are collected using underway seawater systems over the course of research cruises. The phytoplankton and particle size distribution (PSD) in the ocean is a key measurement, required in oceanographic research and ocean optics. Using a data set collected in the North Atlantic, spanning different oceanic water types, we outline the differences observed in concurrent samples collected from two different flow-through systems: a permanently plumbed science seawater supply with an impeller pump, and an independent system with shorter, clean tubing runs and a diaphragm pump. We observed an average 40% decrease in phytoplankton counts, and significant changes to the PSD in the 10-45 µm range, when comparing the impeller- and diaphragm-pump systems. The change in PSD seems to depend more on the type of phytoplankton than on its size, with photosynthetic ciliates displaying the largest decreases in cell counts (78%). Comparison of chlorophyll concentrations across the two systems demonstrated lower sensitivity to sampling system type. The observed changes in several measured biogeochemical parameters (associated with phytoplankton size distribution) between the two sampling systems should be used as a guide towards building best practices for the deployment of flow-through systems in the field for examining optics and biogeochemistry. Using optical models, we evaluated the potential impact of the observed change in measured phytoplankton size spectra on scattering measurements, resulting in significant differences between modeled optical properties across systems (~40%). Researchers should be aware of the methods used with previously collected data sets, and take into consideration the potentially significant and highly variable ecosystem-dependent biases in designing field studies in the future.

  8. Determination of the Thermal Properties of Sands as Affected by Water Content, Drainage/Wetting, and Porosity Conditions for Sands With Different Grain Sizes

    NASA Astrophysics Data System (ADS)

    Smits, K. M.; Sakaki, T.; Limsuwat, A.; Illangasekare, T. H.

    2009-05-01

    It is widely recognized that liquid water, water vapor and temperature movement in the subsurface near the land/atmosphere interface are strongly coupled, influencing many agricultural, biological and engineering applications such as irrigation practices, the assessment of contaminant transport and the detection of buried landmines. In these systems, a clear understanding of how variations in water content, soil drainage/wetting history, porosity conditions and grain size affect the soil's thermal behavior is needed; however, consideration of all of these factors is rare, as very few experimental data showing their effects are available. In this study, the effect of soil moisture, drainage/wetting history, and porosity on the thermal conductivity of sandy soils with different grain sizes was investigated. For this experimental investigation, several recent sensor-based technologies were combined in a Tempe cell modified to have a network of sampling ports, continuously monitoring water saturation, capillary pressure, temperature, and soil thermal properties. The water table was established at the mid elevation of the cell and then lowered slowly. The initially saturated soil sample was subjected to slow drainage, wetting, and secondary drainage cycles. After liquid water drainage ceased, evaporation was induced at the surface to remove soil moisture from the sample and obtain thermal conductivity data below the residual saturation. For the test soils studied, thermal conductivity increased with increasing moisture content, soil density and grain size, while thermal conductivity values were similar between the drying and wetting cycles. Thermal properties measured in this study were then compared with independent estimates made using empirical models from the literature. These soils will be used in a proposed set of experiments in intermediate-scale test tanks to obtain data to validate methods and modeling tools used for landmine detection.

  9. Transport of dissolved organic matter in Boom Clay: Size effects

    NASA Astrophysics Data System (ADS)

    Durce, D.; Aertsens, M.; Jacques, D.; Maes, N.; Van Gompel, M.

    2018-01-01

    A coupled experimental-modelling approach was developed to evaluate the effects of molecular weight (MW) of dissolved organic matter (DOM) on its transport through intact Boom Clay (BC) samples. Natural DOM was sampled in situ in the BC layer. Transport was investigated with percolation experiments on 1.5 cm BC samples by measuring the outflow MW distribution (MWD) by size exclusion chromatography (SEC). A one-dimensional reactive transport model was developed to account for retardation, diffusion and entrapment (attachment and/or straining) of DOM. These parameters were determined along the MWD by discretising the DOM into several MW points and modelling the breakthrough of each point. The pore throat diameter of BC was determined as 6.6-7.6 nm. Below this critical size, transport of DOM is MW dependent and two major types of transport were identified. Below a MW of 2 kDa, DOM was neither strongly trapped nor strongly retarded. This fraction had an average capacity factor of 1.19 ± 0.24 and an apparent dispersion coefficient ranging from 7.5 × 10⁻¹¹ to 1.7 × 10⁻¹¹ m²/s with increasing MW. DOM with MW > 2 kDa was affected by both retardation and straining, which increased significantly with increasing MW while apparent dispersion coefficients decreased. Values ranging from 1.36 to 19.6 were determined for the capacity factor, and from 3.2 × 10⁻¹¹ to 1.0 × 10⁻¹¹ m²/s for the apparent dispersion coefficient, for species with 2.2 kDa < MW < 9.3 kDa. Straining resulted in immobilisation of on average 49 ± 6% of the injected 9.3 kDa species. Our findings show that an accurate description of DOM transport requires consideration of these size effects.
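
    The type of model described above can be sketched in a few lines. Below is a minimal, illustrative explicit finite-difference solver for one MW point, with retardation (capacity factor R), apparent dispersion D, pore-water velocity v, and a first-order entrapment rate mu; all parameter values are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np

# Illustrative 1-D transport of one DOM molecular-weight point through a
# 1.5 cm clay sample: R dc/dt = D d2c/dx2 - v dc/dx - mu c
L, nx = 0.015, 60
dx = L / nx
v, D, R, mu = 1e-7, 5e-11, 1.2, 1e-6       # m/s, m2/s, -, 1/s (assumed)
dt = 0.4 * dx**2 / D                       # explicit stability criterion
c = np.zeros(nx)
c[0] = 1.0                                 # normalised inlet concentration

for _ in range(50_000):
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    grad = np.zeros_like(c)
    grad[1:-1] = (c[1:-1] - c[:-2]) / dx   # upwind difference for v > 0
    c += dt / R * (D * lap - v * grad - mu * c)
    c[0] = 1.0                             # constant-concentration inlet
    c[-1] = c[-2]                          # zero-gradient outlet

print(f"relative outflow concentration: {c[-1]:.3f}")
```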

  10. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).

  12. 13 CFR 121.1009 - What are the procedures for making the size determination?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the size determination? 121.1009 Section 121.1009 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS SIZE REGULATIONS Size Eligibility Provisions and Standards Procedures for Size.... The concern whose size is under consideration has the burden of establishing its small business size...

  13. 13 CFR 121.1009 - What are the procedures for making the size determination?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... the size determination? 121.1009 Section 121.1009 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS SIZE REGULATIONS Size Eligibility Provisions and Standards Procedures for Size.... The concern whose size is under consideration has the burden of establishing its small business size...

  14. 13 CFR 121.1009 - What are the procedures for making the size determination?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... the size determination? 121.1009 Section 121.1009 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS SIZE REGULATIONS Size Eligibility Provisions and Standards Procedures for Size.... The concern whose size is under consideration has the burden of establishing its small business size...

  15. Proton-Induced X-Ray Emission Analysis of Crematorium Emissions

    NASA Astrophysics Data System (ADS)

    Ali, Salina; Nadareski, Benjamin; Safiq, Alexandrea; Smith, Jeremy; Yoskowitz, Josh; Labrake, Scott; Vineyard, Michael

    2013-10-01

    There has been considerable concern in recent years about possible mercury emissions from crematoria. We have performed a particle-induced X-ray emission (PIXE) analysis of atmospheric aerosol samples collected on the roof of the crematorium at Vale Cemetery in Schenectady, NY, to address this concern. The samples were collected with a nine-stage cascade impactor that separates the particulate matter according to particle size. The aerosol samples were bombarded with 2.2-MeV protons from the Union College 1.1-MV Pelletron Accelerator. The emitted X-rays were detected with a silicon drift detector and the X-ray energy spectra were analyzed using GUPIX software to determine the elemental concentrations. We measured significant concentrations of sulfur, phosphorus, potassium, calcium, and iron, but essentially no mercury. The lower limit of detection for mercury in this experiment was approximately 0.2 ng/m3. We will describe the experimental procedure, discuss the PIXE analysis, and present preliminary results.

  16. Optimum allocation for a dual-frame telephone survey.

    PubMed

    Wolter, Kirk M; Tao, Xian; Montgomery, Robert; Smith, Philip J

    2015-12-01

    Careful design of a dual-frame random digit dial (RDD) telephone survey requires selecting from among many options that have varying impacts on cost, precision, and coverage in order to obtain the best possible implementation of the study goals. One such consideration is whether to screen cell-phone households in order to interview cell-phone-only (CPO) households and exclude dual-user households, or to take all interviews obtained via the cell-phone sample. We present a framework in which to consider the tradeoffs between these two options and a method to select the optimal design. We derive and discuss the optimum allocation of sample size between the two sampling frames and explore the choice of the optimum p, the mixing parameter for the dual-user domain. We illustrate our methods using the National Immunization Survey, sponsored by the Centers for Disease Control and Prevention.
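
    The allocation problem can be illustrated with a stylised composite estimator: split a fixed budget between the landline and cell-phone frames and choose the mixing parameter p for the dual-user domain to minimise the variance of the combined mean. A minimal grid-search sketch follows; the unit costs, domain shares, and unit variances are invented assumptions, and the variance formula is a simplification of the paper's framework.

```python
import numpy as np

budget = 100_000.0
cost_l, cost_c = 20.0, 35.0                  # cost per completed interview (assumed)
s2_l_only, s2_cpo = 1.0, 1.4                 # unit variances by domain (assumed)
s2_dual_l, s2_dual_c = 1.1, 1.2
w_l_only, w_cpo, w_dual = 0.35, 0.40, 0.25   # domain population shares (assumed)

best = None
for f in np.linspace(0.05, 0.95, 91):        # share of budget to the landline frame
    n_l = f * budget / cost_l
    n_c = (1 - f) * budget / cost_c
    for p in np.linspace(0.0, 1.0, 101):     # weight on landline dual-user data
        # stratified-type variance of the composite mean estimator
        var = (w_l_only**2 * s2_l_only / n_l
               + w_cpo**2 * s2_cpo / n_c
               + w_dual**2 * (p**2 * s2_dual_l / n_l
                              + (1 - p)**2 * s2_dual_c / n_c))
        if best is None or var < best[0]:
            best = (var, f, p)

var, f, p = best
print(f"optimal landline budget share {f:.2f}, mixing p {p:.2f}, variance {var:.3g}")
```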

  17. Abundance, size distributions and trace-element binding of organic and iron-rich nanocolloids in Alaskan rivers, as revealed by field-flow fractionation and ICP-MS

    NASA Astrophysics Data System (ADS)

    Stolpe, Björn; Guo, Laodong; Shiller, Alan M.; Aiken, George R.

    2013-03-01

    Water samples were collected from six small rivers in the Yukon River basin in central Alaska to examine the role of colloids and organic matter in the transport of trace elements in Northern high latitude watersheds influenced by permafrost. Concentrations of dissolved organic carbon (DOC), selected elements (Al, Si, Ca, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Rb, Sr, Ba, Pb, U), and UV-absorbance spectra were measured in 0.45 μm filtered samples. 'Nanocolloidal size distributions' (0.5-40 nm, hydrodynamic diameter) of humic-type and chromophoric dissolved organic matter (CDOM), Cr, Mn, Fe, Co, Ni, Cu, Zn, and Pb were determined by on-line coupling of flow field-flow fractionation (FFF) to detectors including UV-absorbance, fluorescence, and ICP-MS. Total dissolved and nanocolloidal concentrations of the elements varied considerably between the rivers and between spring flood and late summer base flow. Data on specific UV-absorbance (SUVA), spectral slopes, and the nanocolloidal fraction of the UV-absorbance indicated a decrease in aromaticity and size of CDOM from spring flood to late summer. The nanocolloidal size distributions indicated the presence of different 'components' of nanocolloids. 'Fulvic-rich nanocolloids' had a hydrodynamic diameter of 0.5-3 nm throughout the sampling season; 'organic/iron-rich nanocolloids' occurred in the <8 nm size range during the spring flood; whereas 'iron-rich nanocolloids' formed a discrete 4-40 nm component during summer base flow. Mn, Co, Ni, Cu and Zn were distributed between the nanocolloid components depending on the stability constant of the metal (+II)-organic complexes, while the stronger association of Cr with the iron-rich nanocolloids was attributed to the higher oxidation states of Cr (+III or +IV). Changes in total dissolved element concentrations, size and composition of CDOM, and occurrence and size of organic/iron and iron-rich nanocolloids were related to variations in their sources from either the upper organic-rich soil or the deeper mineral layer, depending on seasonal variations in hydrological flow patterns and permafrost dynamics.

  18. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.

    PubMed

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common-sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets from three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated.
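
    One of the building blocks mentioned above, a resampled species accumulation curve, is easy to sketch. The code below averages accumulation curves over randomized capture orders for a hypothetical assemblage; comparing the species count midway along the curve to the endpoint mimics the six-hour versus whole-night contrast, under assumed abundance data.

```python
import numpy as np

rng = np.random.default_rng(42)

def accumulation_curve(captures, n_boot=200):
    """Mean species accumulation curve obtained by resampling capture order.

    `captures` is a sequence of species IDs, one per captured individual."""
    captures = np.asarray(captures)
    k = len(captures)
    curves = np.zeros((n_boot, k))
    for b in range(n_boot):
        seen = set()
        for i, sp in enumerate(rng.permutation(captures)):
            seen.add(sp)
            curves[b, i] = len(seen)
    return curves.mean(axis=0)

# 80 captures from 15 hypothetical species with skewed abundances (assumed)
captures = rng.choice(15, size=80, p=rng.dirichlet(np.ones(15) * 0.5))
curve = accumulation_curve(captures)
print(f"species after 40 captures: {curve[39]:.1f}, after 80: {curve[79]:.1f}")
```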

  20. Monitoring nekton as a bioindicator in shallow estuarine habitats

    USGS Publications Warehouse

    Raposa, K.B.; Roman, C.T.; Heltshe, J.F.

    2003-01-01

    Long-term monitoring of estuarine nekton has many practical and ecological benefits, but efforts are hampered by a lack of standardized sampling procedures. This study provides a rationale for monitoring nekton in shallow (<1 m), temperate, estuarine habitats and addresses some important issues that arise when developing monitoring protocols. Sampling in seagrass and salt marsh habitats is emphasized due to the susceptibility of each habitat to anthropogenic stress and to the abundant and rich nekton assemblages that each habitat supports. Extensive sampling with quantitative enclosure traps that estimate nekton density is suggested. These gears have a high capture efficiency in most habitats and are small enough (e.g., 1 m²) to permit sampling in specific microhabitats. Other aspects of nekton monitoring are discussed, including spatial and temporal sampling considerations, station selection, sample size estimation, and data collection and analysis. Developing and initiating long-term nekton monitoring programs will help evaluate natural and human-induced changes in estuarine nekton over time and advance our understanding of the interactions between nekton and the dynamic estuarine environment.

  1. Clinical Trials Targeting Aging and Age-Related Multimorbidity

    PubMed Central

    Crimmins, Eileen M; Grossardt, Brandon R; Crandall, Jill P; Gelfond, Jonathan A L; Harris, Tamara B; Kritchevsky, Stephen B; Manson, JoAnn E; Robinson, Jennifer G; Rocca, Walter A; Temprosa, Marinella; Thomas, Fridtjof; Wallace, Robert; Barzilai, Nir

    2017-01-01

    Background: There is growing interest in identifying interventions that may increase health span by targeting biological processes underlying aging. The design of efficient and rigorous clinical trials to assess these interventions requires careful consideration of eligibility criteria, outcomes, sample size, and monitoring plans. Methods: Experienced geriatrics researchers and clinical trialists collaborated to provide advice on clinical trial design. Results: Outcomes based on the accumulation and incidence of age-related chronic diseases are attractive for clinical trials targeting aging. Accumulation and incidence rates of multimorbidity outcomes were developed by selecting at-risk subsets of individuals from three large cohort studies of older individuals. These provide representative benchmark data for decisions on eligibility, duration, and assessment protocols. Monitoring rules should be sensitive to targeting aging-related, rather than disease-specific, outcomes. Conclusions: Clinical trials targeting aging are feasible, but require careful design consideration and monitoring rules. PMID:28364543

  2. Study design in medical research: part 2 of a series on the evaluation of scientific publications.

    PubMed

    Röhrig, Bernd; du Prel, Jean-Baptist; Blettner, Maria

    2009-03-01

    The scientific value and informativeness of a medical study are determined to a major extent by the study design. Errors in study design cannot be corrected afterwards. Various aspects of study design are discussed in this article. Six essential considerations in the planning and evaluation of medical research studies are presented and discussed in the light of selected scientific articles from the international literature as well as the authors' own scientific expertise with regard to study design. The six main considerations for study design are the question to be answered, the study population, the unit of analysis, the type of study, the measuring technique, and the calculation of sample size. This article is intended to give the reader guidance in evaluating the design of studies in medical research. This should enable the reader to categorize medical studies better and to assess their scientific quality more accurately.

  3. Spousal interrelations in happiness in the Seattle Longitudinal Study: considerable similarities in levels and change over time.

    PubMed

    Hoppmann, Christiane A; Gerstorf, Denis; Willis, Sherry L; Schaie, K Warner

    2011-01-01

    Development does not take place in isolation and is often interrelated with close others such as marital partners. To examine interrelations in spousal happiness across midlife and old age, we used 35-year longitudinal data from both members of 178 married couples in the Seattle Longitudinal Study. Latent growth curve models revealed sizeable spousal similarities not only in levels of happiness but also in how happiness changed over time. These spousal interrelations were considerably larger in size than those found among random pairs of women and men from the same sample. Results are in line with life-span theories emphasizing an interactive minds perspective by showing that adult happiness waxes and wanes in close association with the respective spouse. Our findings also complement previous individual-level work on age-related changes in well-being by pointing to the importance of using the couple as the unit of analysis.

  4. Diolistics: incorporating fluorescent dyes into biological samples using a gene gun

    PubMed Central

    O’Brien, John A.; Lummis, Sarah C.R.

    2007-01-01

    The hand-held gene gun provides a rapid and efficient method of incorporating fluorescent dyes into cells, a technique that is becoming known as diolistics. Transporting fluorescent dyes into cells has, in the past, relied predominantly on injection or chemical methods. The use of the gene gun, combined with the new generation of fluorescent dyes, circumvents some of the problems of these methods and also enables the study of cells that have traditionally proved difficult to transfect (e.g. those deep in tissues and/or terminally differentiated); in addition, the use of ion- or metabolite-sensitive dyes provides a route to studying cellular mechanisms. Diolistics is also ideal for loading cells with optical nanosensors – nanometre-sized sensors linked to fluorescent probes. Here, we discuss the theoretical considerations of using diolistics, its advantages compared with other methods of inserting dyes into cells, and the current uses of the technique, with particular consideration of nanosensors. PMID:17945370

  5. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structures used for modeling motor vehicle crashes remain the traditional Poisson and Poisson-gamma (or Negative Binomial) distributions; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter becomes unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
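
    The core of the simulation study is easy to reproduce in outline: draw Poisson-gamma counts for a given mean and dispersion, then try to recover the dispersion parameter. The sketch below uses the method-of-moments estimator, based on the variance relation Var(Y) = μ + αμ², and shows how unstable the estimate becomes for a low sample mean and small sample size; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_nb(mean, alpha, n):
    """Poisson-gamma counts with Var(Y) = mean + alpha * mean**2."""
    lam = rng.gamma(shape=1.0 / alpha, scale=mean * alpha, size=n)
    return rng.poisson(lam)

def alpha_moments(y):
    """Method-of-moments dispersion estimate from var = mu + alpha * mu^2."""
    m, v = y.mean(), y.var(ddof=1)
    return max((v - m) / m**2, 0.0)

# Low sample mean and small sample size, as in the "low mean problem"
for n in (50, 1000):
    est = [alpha_moments(simulate_nb(mean=0.5, alpha=1.0, n=n)) for _ in range(500)]
    print(f"n={n:4d}: mean alpha-hat {np.mean(est):.2f}, sd {np.std(est):.2f}")
```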

  6. Performance Evaluation of Particle Sampling Probes for Emission Measurements of Aircraft Jet Engines

    NASA Technical Reports Server (NTRS)

    Lee, Poshin; Chen, Da-Ren; Sanders, Terry (Technical Monitor)

    2001-01-01

    Considerable attention has recently been paid to the impact of aircraft-produced aerosols on the global climate. Sampling particles directly from jet engines has been performed by different research groups in the U.S. and Europe. However, a large variation has been observed among published data on the conversion efficiency and emission indexes of jet engines. The variation surely results in part from differences in test engine types, engine operating conditions, and environmental conditions. The other factor that could produce the observed variation is the performance of the sampling probes used; unfortunately, this is often neglected in the jet engine community, and particle losses during the sampling, transport, and dilution processes are often not discussed or considered in the literature. To address this issue, we evaluated the performance of one sampling probe by challenging it with monodisperse particles. A significant performance difference was observed for the sampling probe under different temperature conditions. Thermophoretic effects, non-isokinetic sampling, and turbulent losses contribute to the loss of particles in sampling probes. The results of this study show that particle loss can be dramatic if the sampling probe is not well designed. Further, the results allow one to recover the actual size distributions emitted from jet engines.

  7. [Monitoring microbiological safety of small systems of water distribution. Comparison of two sampling programs in a town in central Italy].

    PubMed

    Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A

    2005-01-01

    To determine the frequency of sampling in small water distribution systems (<5,000 inhabitants) and compare the results under different hypotheses about the bacterial distribution, we carried out two sampling programs to monitor the water distribution system of a town in Central Italy between July and September 1992; the Poisson distribution assumption implied 4 water samples, while the negative binomial assumption implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably, from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (SD = 5.29) for the 21 samples and 3 coliforms/100 ml (SD = 6) for the four samples. However, the hypothesis of homogeneity was rejected (p-value <0.001), and the probability of a type II error under the heterogeneity assumption was higher with 4 samples (β = 0.24) than with 21 (β = 0.05). For this small network, determining the number of samples according to the heterogeneity hypothesis strengthens the conclusion that the water is drinkable, compared with the homogeneity assumption.
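
    The consequence of the distributional assumption can be illustrated directly: under clumped (negative binomial) contamination with the same mean density, the chance that every sample contains zero coliforms, a proxy for the type II error of declaring the water safe, is far higher with 4 samples than with 21. A sketch, with an assumed negative binomial shape parameter:

```python
from scipy import stats

mean = 2.33   # coliforms per 100 ml, as reported above
k = 0.25      # negative binomial shape; small k = strongly clumped (assumed)

p0_pois = stats.poisson.pmf(0, mean)              # P(zero count) if homogeneous
p0_nb = stats.nbinom.pmf(0, k, k / (k + mean))    # P(zero count) if clumped

for n in (4, 21):
    print(f"n={n:2d}: P(all clear | Poisson) = {p0_pois**n:.2e}, "
          f"P(all clear | neg. binomial) = {p0_nb**n:.2e}")
```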

  8. Using simulation to aid trial design: Ring-vaccination trials.

    PubMed

    Hitchings, Matt David Thomas; Grais, Rebecca Freeman; Lipsitch, Marc

    2017-03-01

    The 2014-16 West African Ebola epidemic highlights the need for rigorous, rapid clinical trial methods for vaccines. A challenge for trial design is making sample size calculations based on incidence within the trial, total vaccine effect, and intracluster correlation, when these parameters are uncertain in the presence of indirect effects of vaccination. We present a stochastic, compartmental model for a ring vaccination trial. After identification of an index case, a ring of contacts is recruited and either vaccinated immediately or after 21 days. The primary outcome of the trial is total vaccine effect, counting cases only from a pre-specified window in which the immediate arm is assumed to be fully protected and the delayed arm is not protected. Simulation results are used to calculate the necessary sample size and the estimated vaccine effect. Under baseline assumptions about vaccine properties, monthly incidence in unvaccinated rings, and trial design, a standard sample-size calculation neglecting dynamic effects estimated that 7,100 participants would be needed to achieve 80% power to detect a difference in attack rate between arms, while incorporating dynamic considerations in the model increased the estimate to 8,900. This approach replaces assumptions about parameters at the ring level with assumptions about disease dynamics and vaccine characteristics at the individual level, so within this framework we were able to describe the sensitivity of the trial power and estimated effect to various parameters. We found that both of these quantities are sensitive to properties of the vaccine, to setting-specific parameters over which investigators have little control, and to parameters that are determined by the study design. Incorporating simulation into the trial design process can improve the robustness of sample size calculations. For this specific trial design, vaccine effectiveness depends on properties of the ring vaccination design and on the measurement window, as well as on the epidemiologic setting.
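
    A stripped-down version of such a simulation-based power calculation is sketched below: between-ring heterogeneity in risk is drawn from a gamma distribution, cases are generated in a delayed and an immediate arm, and power is the fraction of simulated trials with a significant difference in attack rates. All parameter values and the gamma heterogeneity model are our assumptions, not the published trial's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def trial_power(n_rings, ring_size=50, attack=0.01, ve=0.5,
                shape=2.0, n_sim=1000, alpha=0.05):
    """Fraction of simulated trials in which a chi-square test finds a
    significant difference in attack rate between the immediately vaccinated
    and delayed arms. Ring-level risks are gamma-distributed (assumption)."""
    hits = 0
    n = n_rings * ring_size
    for _ in range(n_sim):
        risk = np.clip(attack * rng.gamma(shape, 1.0 / shape, size=n_rings), 0, 1)
        cases_delayed = rng.binomial(ring_size, risk).sum()
        cases_immediate = rng.binomial(ring_size, risk * (1.0 - ve)).sum()
        if cases_delayed + cases_immediate == 0:
            continue  # no events, no test possible
        table = [[cases_immediate, n - cases_immediate],
                 [cases_delayed, n - cases_delayed]]
        if stats.chi2_contingency(table)[1] < alpha:
            hits += 1
    return hits / n_sim

for n_rings in (40, 80, 120):
    print(f"{n_rings} rings per arm -> power {trial_power(n_rings):.2f}")
```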

  9. Elastomer modulus and dielectric strength scaling with sample thickness

    NASA Astrophysics Data System (ADS)

    Larson, Kent

    2015-04-01

    Material characteristics such as adhesion and dielectric strength have well recognized dependencies on material thickness. There is disagreement, however, on the scale: the long-held dictum that dielectric strength is inversely proportional to the square root of sample thickness has been shown not to hold for all materials, nor for all possible thickness regions. In D-EAP applications, some studies have postulated a "critical thickness" below which properties show significantly less thickness dependency. While a great deal of data is available for dielectric strength, other properties are not nearly as well documented as samples get thinner. In particular, elastic modulus has been found to increase and elongation to decrease as sample thickness is lowered. This trend can be observed experimentally, but has rarely been reported and certainly does not appear in typical suppliers' product data sheets. Both published and newly generated data were used to study properties such as elastic modulus and dielectric strength versus sample thickness in silicone elastomers. Several theories are examined to explain such behavior, such as the impact of defect size and of common (but not well reported) concentration gradients that arise during elastomer curing and create micron-sized layers at the upper and lower interfaces with properties divergent from the bulk material. As Dielectric Electro-Active Polymer applications strive for ever lower material thickness, changing mechanical properties must be recognized and taken into consideration for accurate electro-mechanical predictions of performance.

  10. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    PubMed

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
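
    For the stratified design mentioned above, the classical minimum-variance solution for a fixed total sample size is Neyman allocation, with stratum sample sizes proportional to N_h * S_h. A short sketch under invented strata:

```python
import numpy as np

def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Neyman allocation: n_h proportional to N_h * S_h minimises the variance
    of the stratified mean for a fixed total sample size n_total."""
    w = np.asarray(stratum_sizes, dtype=float) * np.asarray(stratum_sds, dtype=float)
    return np.round(n_total * w / w.sum()).astype(int)

# Hypothetical strata for an IST survey (e.g., three faculties of a university)
sizes = [4000, 2500, 1500]   # stratum population sizes (assumed)
sds = [1.8, 2.6, 3.4]        # anticipated SDs of the item-sum response (assumed)
print(neyman_allocation(600, sizes, sds))   # per-stratum sample sizes
```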

  11. Controlled creation and stability of kπ skyrmions on a discrete lattice

    NASA Astrophysics Data System (ADS)

    Hagemeister, Julian; Siemens, Ansgar; Rózsa, Levente; Vedmedenko, Elena Y.; Wiesendanger, Roland

    2018-05-01

    We determine sizes and activation energies of kπ skyrmions on a discrete lattice using the Landau-Lifshitz-Gilbert equation and the geodesic nudged elastic band method. The employed atomic material parameters are based on the skyrmionic material system Pd/Fe/Ir(111). We find that the critical magnetic fields for collapse of the 2π skyrmion and 3π skyrmion are very close to each other and considerably lower than the critical field of the 1π skyrmion. The activation energy protecting the structures does not strictly decrease with increasing k, as it can be larger for the 3π skyrmion than for the 2π skyrmion depending on the applied magnetic field. Furthermore, we propose a method of switching the skyrmion order k by a reversal of the magnetic field direction in samples of finite size.

  12. Effect of Cu2+ substitution on the structural, optical and magnetic behaviour of chemically derived manganese ferrite nanoparticles

    NASA Astrophysics Data System (ADS)

    Vasuki, G.; Balu, T.

    2018-06-01

    Mixed spinel copper manganese ferrite (CuxMn1-xFe2O4, x = 0, 0.25, 0.5, 0.75, 1) nanoparticles were synthesized by the chemical co-precipitation technique. From the powder x-ray diffraction analysis, the lattice constant, volume of the unit cell, x-ray density, hopping lengths, crystallite size, surface area, dislocation density and microstrain were calculated. The substitution of Cu2+ ions produces a considerable reduction in the crystallite size of manganese ferrite, from 34 nm to 22 nm. Further, a linear fit of the Williamson-Hall plot was used to determine the microstrain and crystallite size. The crystallite size and morphology were further observed by high-resolution transmission electron microscopy and scanning electron microscopy. The diffraction rings observed in the selected-area electron diffraction patterns confirm the crystalline nature of all the samples. Energy-dispersive x-ray analysis shows the composition of all the elements incorporated in the synthesized nanomaterials. FTIR studies reveal the absorption peaks that correspond to the metal-oxygen vibrations in the tetrahedral and octahedral sites. From the UV-vis absorption spectra, the band gap energy, refractive index and optical dielectric constant were determined. Magnetic studies carried out using a vibrating sample magnetometer show interesting behaviour in the variation of magnetisation and coercivity. Peculiar magnetic behaviour is observed when Cu2+ ions are substituted in manganese ferrites. All the synthesized materials have very low values of the squareness ratio, which is attributed to superparamagnetic behaviour.

  13. A method of determining where to target surveillance efforts in heterogeneous epidemiological systems

    PubMed Central

    van den Bosch, Frank; Gottwald, Timothy R.; Alonso Chavez, Vasthi

    2017-01-01

    The spread of pathogens into new environments poses a considerable threat to human, animal, and plant health, and by extension, human and animal wellbeing, ecosystem function, and agricultural productivity, worldwide. Early detection through effective surveillance is a key strategy to reduce the risk of their establishment. Whilst it is well established that statistical and economic considerations are of vital importance when planning surveillance efforts, it is also important to consider epidemiological characteristics of the pathogen in question—including heterogeneities within the epidemiological system itself. One of the most pronounced realisations of this heterogeneity is seen in the case of vector-borne pathogens, which spread between ‘hosts’ and ‘vectors’—with each group possessing distinct epidemiological characteristics. As a result, an important question when planning surveillance for emerging vector-borne pathogens is where to place sampling resources in order to detect the pathogen as early as possible. We answer this question by developing a statistical function which describes the probability distributions of the prevalences of infection at first detection in both hosts and vectors. We also show how this method can be adapted in order to maximise the probability of early detection of an emerging pathogen within imposed sample size and/or cost constraints, and demonstrate its application using two simple models of vector-borne citrus pathogens. Under the assumption of a linear cost function, we find that sampling costs are generally minimised when either hosts or vectors, but not both, are sampled. PMID:28846676
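
    The corner-solution result for a linear cost function can be seen in a few lines: with detection probability 1 − (1 − p_h)^(n_h) · (1 − p_v)^(n_v) and a budget line c_h·n_h + c_v·n_v = B, the log of the miss probability is linear in the sample sizes, so the optimum sits at an endpoint. The prevalences and costs below are invented for illustration.

```python
import numpy as np

budget = 1000.0
cost_host, cost_vector = 5.0, 1.0     # cost per individual tested (assumed)
p_host, p_vector = 0.02, 0.006        # prevalence at the time of the survey (assumed)

def detection_probability(f):
    """P(at least one positive) when a fraction f of the budget goes to hosts."""
    n_host = f * budget / cost_host
    n_vector = (1 - f) * budget / cost_vector
    return 1 - (1 - p_host) ** n_host * (1 - p_vector) ** n_vector

fractions = np.linspace(0.0, 1.0, 101)
probs = [detection_probability(f) for f in fractions]
best = int(np.argmax(probs))
print(f"best host budget share: {fractions[best]:.2f}, "
      f"detection probability: {probs[best]:.3f}")
```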

  14. Analysis of environmental microplastics by vibrational microspectroscopy: FTIR, Raman or both?

    PubMed

    Käppler, Andrea; Fischer, Dieter; Oberbeckmann, Sonja; Schernewski, Gerald; Labrenz, Matthias; Eichhorn, Klaus-Jochen; Voit, Brigitte

    2016-11-01

    The contamination of aquatic ecosystems with microplastics has recently been reported in many studies, and negative impacts on the aquatic biota have been described. For the chemical identification of microplastics, mainly Fourier transform infrared (FTIR) and Raman spectroscopy are used. But up to now, a critical comparison and validation of both spectroscopic methods with respect to microplastics analysis has been missing. To close this knowledge gap, we investigated environmental samples by both Raman and FTIR spectroscopy. Firstly, particles and fibres >500 μm extracted from beach sediment samples were analysed by Raman and FTIR microspectroscopic single measurements. Our results illustrate that both methods are in principle suitable for identifying microplastics from the environment. However, in some cases, especially for coloured particles, a combination of both spectroscopic methods is necessary for a complete and reliable characterisation of the chemical composition. Secondly, a marine sample containing particles <400 μm was investigated by Raman imaging and FTIR transmission imaging. The results were compared regarding the number, size and type of detectable microplastics as well as spectral quality, measurement time and handling. We show that FTIR imaging leads to a significant underestimation (about 35%) of microplastics compared to Raman imaging, especially in the size range <20 μm. However, the measurement time of Raman imaging is considerably longer than that of FTIR imaging. In summary, we propose a further size division within the smaller microplastics fraction: 500-50 μm (rapid and reliable analysis by FTIR imaging) and 50-1 μm (detailed and more time-consuming analysis by Raman imaging). Graphical Abstract: Marine microplastic sample (fraction <400 μm) on a silicon filter (middle) with the corresponding Raman and IR images.

  15. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    PubMed

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple-rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. Finally, we apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
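
    The ABC ingredient of MECCA can be illustrated with a toy rejection sampler: propose a Brownian-motion rate from the prior, simulate tip data, and accept the proposal when a summary statistic matches the observed one. The star-phylogeny setup, prior, summary statistic and tolerance below are our simplifications, not MECCA itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Under Brownian motion on a star phylogeny of depth T, tip values are
# independent draws from N(0, sigma2 * T); the tip variance summarises the data.
n_tips, depth = 50, 10.0
true_sigma2 = 0.5
observed = rng.normal(0.0, np.sqrt(true_sigma2 * depth), n_tips)
s_obs = observed.var(ddof=1)                        # summary statistic

accepted = []
while len(accepted) < 500:
    sigma2 = rng.uniform(0.01, 2.0)                 # prior draw
    sim = rng.normal(0.0, np.sqrt(sigma2 * depth), n_tips)
    if abs(sim.var(ddof=1) - s_obs) < 0.1 * s_obs:  # rejection step
        accepted.append(sigma2)

print(f"posterior mean sigma^2 ~ {np.mean(accepted):.2f} (true {true_sigma2})")
```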

  16. Value of lower respiratory tract surveillance cultures to predict bacterial pathogens in ventilator-associated pneumonia: systematic review and diagnostic test accuracy meta-analysis.

    PubMed

    Brusselaers, Nele; Labeau, Sonia; Vogelaers, Dirk; Blot, Stijn

    2013-03-01

    In ventilator-associated pneumonia (VAP), early appropriate antimicrobial therapy may be hampered by the involvement of multidrug-resistant (MDR) pathogens. A systematic review and diagnostic test accuracy meta-analysis were performed to analyse whether lower respiratory tract surveillance cultures accurately predict the causative pathogens of subsequent VAP in adult patients. Selection and assessment of eligibility were performed by three investigators by mutual agreement. Of the 525 studies retrieved, 14 were eligible for inclusion (all in English; published since 1994), accounting for 791 VAP episodes. The following data were collected: study and population characteristics; inclusion and exclusion criteria; diagnostic criteria for VAP; and the microbiological workup of surveillance and diagnostic VAP cultures. Sub-analyses were conducted for VAP caused by Staphylococcus aureus, Pseudomonas spp., and Acinetobacter spp., MDR microorganisms, frequency of sampling, and consideration of all versus only the most recent surveillance cultures. The meta-analysis showed a high accuracy of surveillance cultures, with pooled sensitivities up to 0.75 and specificities up to 0.92 in culture-positive VAP. The area under the curve (AUC) of the hierarchical summary receiver-operating characteristic curve demonstrates moderate accuracy (AUC: 0.90) in predicting multidrug resistance. A sampling frequency of >2/week (sensitivity 0.79; specificity 0.96) and consideration of only the most recent surveillance culture (sensitivity 0.78; specificity 0.96) are associated with a higher accuracy of prediction. This study provides evidence for the benefit of surveillance cultures in predicting MDR bacterial pathogens in VAP. However, clinical and statistical heterogeneity, limited sample sizes, and bias remain important limitations of this meta-analysis.

  17. Redshift differences of galaxies in nearby groups

    NASA Technical Reports Server (NTRS)

    Harrison, E. R.

    1975-01-01

    It is reported that galaxies in nearby groups exhibit anomalous nonvelocity redshifts. In this discussion, (1) four classes of nearby groups of galaxies are analyzed, and no significant nonvelocity redshift effect is found; and (2) it is pointed out that transverse velocities (i.e., velocities transverse to the line of sight of the main galaxy, or center of mass) contribute components to the redshift measurements of companion galaxies. The redshifts of galaxies in nearby groups of appreciable angular size are considerably affected by these velocity projection effects. The transverse velocity contributions average out in rich, isotropic groups, and also in large samples of irregular groups of low membership, as in the four classes referred to in (1), but can introduce apparent discrepancies in small samples (as studied by Arp) of nearby groups of low membership.

  18. Using fiberglass volumes for VPI of superconductive magnetic systems’ insulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreev, I. S.; Bezrukov, A. A.; Pischugin, A. B.

    2014-01-29

    The paper describes a method of manufacturing fiberglass molds for vacuum pressure impregnation (VPI) of the high-voltage insulation of superconductive magnetic systems (SMS) with epoxidian hot-setting compounds. The basic advantages of using such vacuum volumes are improved quality of insulation impregnation in complex-shaped areas and considerable cost savings in preparing VPI of large-sized components, since the stage of fabricating a metal impregnation volume is dispensed with. Such fiberglass vacuum molds were used for VPI of high-voltage insulation samples of an ITER reactor's PF1 poloidal coil. The electric insulation of these samples has successfully undergone a wide range of high-voltage and mechanical tests at room and cryogenic temperatures. Some results of the tests are also given in this paper.

  19. Optimizing adaptive design for Phase 2 dose-finding trials incorporating long-term success and financial considerations: A case study for neuropathic pain.

    PubMed

    Gao, Jingjing; Nangia, Narinder; Jia, Jia; Bolognese, James; Bhattacharyya, Jaydeep; Patel, Nitin

    2017-06-01

    In this paper, we propose an adaptive randomization design for Phase 2 dose-finding trials to optimize the Net Present Value (NPV) of an experimental drug. We replace the traditional fixed sample size design (Patel et al., 2012) with this new design to see if the NPV from the original paper can be improved. Comparison of the proposed design to the previous design is made via simulations using a hypothetical example based on a diabetic neuropathic pain study. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Lessons learned in research: an attempt to study the effects of magnetic therapy.

    PubMed

    Szor, Judy K; Holewinski, Paul

    2002-02-01

    Difficulties related to chronic wound healing research are frequently discussed, but results of less-than-perfect studies commonly are not published. A 16-week, randomized controlled double-blind study attempted to investigate the effect of static magnetic therapy on the healing of diabetic foot ulcers. Of 56 subjects, 37 completed the study. Because of the small sample size, randomization did not control for differences between the two groups, and the data could not be analyzed in any meaningful way. The challenges of performing magnetic therapy research are discussed and considerations for future studies are noted.

  1. Multicanonical hybrid Monte Carlo algorithm: Boosting simulations of compact QED

    NASA Astrophysics Data System (ADS)

    Arnold, G.; Schilling, K.; Lippert, Th.

    1999-03-01

    We demonstrate that substantial progress can be achieved in the study of the phase structure of four-dimensional compact QED by a joint use of hybrid Monte Carlo and multicanonical algorithms through an efficient parallel implementation. This is borne out by the observation of considerable speedup of tunnelling between the metastable states, close to the phase transition, on the Wilson line. We estimate that the creation of adequate samples (with order 100 flip-flops) becomes a matter of half a year's run time at 2 Gflops sustained performance for lattices of size up to 24⁴.
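
    The mechanism behind the observed speedup can be shown on a toy problem: a multicanonical weight that cancels the action up to the barrier top makes the sampled density flat between two metastable states, so the chain flip-flops far more often than a canonical one. The 1-D double well below is only an illustration of the idea, not the compact-QED setup; in practice the weight has to be estimated iteratively.

```python
import numpy as np

rng = np.random.default_rng(3)

def action(x):
    """1-D double well standing in for the metastable phases:
    minima at x = +/-1 separated by a barrier of height 5."""
    return 5.0 * (x**2 - 1.0) ** 2

def muca_weight(x):
    """Idealised multicanonical weight: cancels the action up to the barrier
    top, leaving a flat density between and over the two wells."""
    return min(action(x), action(0.0))

def run(weight, n=200_000):
    """Metropolis chain targeting exp(-action(x) + weight(x)); counts
    flip-flops between the wells as a proxy for tunnelling events."""
    x, side, flips = -1.0, -1, 0
    for _ in range(n):
        y = x + rng.normal(0.0, 0.3)
        if np.log(rng.random()) < -(action(y) - action(x)) + weight(y) - weight(x):
            x = y
        if x * side < -0.5:        # chain crossed into the opposite well
            flips += 1
            side = -side
    return flips

print("canonical flip-flops:     ", run(lambda x: 0.0))
print("multicanonical flip-flops:", run(muca_weight))
```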

  2. Fully automatic characterization and data collection from crystals of biological macromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander

    A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal, and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  3. Genome Size Variation in the Genus Carthamus (Asteraceae, Cardueae): Systematic Implications and Additive Changes During Allopolyploidization

    PubMed Central

    GARNATJE, TERESA; GARCIA, SÒNIA; VILATERSANA, ROSER; VALLÈS, JOAN

    2006-01-01

    • Background and Aims Plant genome size is an important biological characteristic, with relationships to systematics, ecology and distribution. Currently, there is no information regarding nuclear DNA content for any Carthamus species. In addition to improving the knowledge base, this research focuses on interspecific variation and its implications for the infrageneric classification of this genus. Genome size variation in the process of allopolyploid formation is also addressed. • Methods Nuclear DNA samples from 34 populations of 16 species of the genus Carthamus were assessed by flow cytometry using propidium iodide. • Key Results The 2C values ranged from 2·26 pg for C. leucocaulos to 7·46 pg for C. turkestanicus, and monoploid genome size (1Cx-value) ranged from 1·13 pg in C. leucocaulos to 1·53 pg in C. alexandrinus. Mean genome sizes differed significantly, based on sectional classification. Both allopolyploid species (C. creticus and C. turkestanicus) exhibited nuclear DNA contents in accordance with the sum of the putative parental C-values (in one case with a slight reduction, frequent in polyploids), supporting their hybrid origin. • Conclusions Genome size represents a useful tool in elucidating systematic relationships between closely related species. A considerable reduction in monoploid genome size, possibly due to the hybrid formation, is also reported within these taxa. PMID:16390843

  4. Cross-seasonal effects on waterfowl productivity: Implications under climate change

    USGS Publications Warehouse

    Osnas, Erik; Zhao, Qing; Runge, Michael C.; Boomer, G Scott

    2016-01-01

    Previous efforts to relate winter-ground precipitation to subsequent reproductive success as measured by the ratio of juveniles to adults in the autumn failed to account for increased vulnerability of juvenile ducks to hunting and uncertainty in the estimated age ratio. Neglecting increased juvenile vulnerability will positively bias the mean productivity estimate, and neglecting increased vulnerability and estimation uncertainty will positively bias the year-to-year variance in productivity because raw age ratios are the product of sampling variation, the year-specific vulnerability, and year-specific reproductive success. Therefore, we estimated the effects of cumulative winter precipitation in the California Central Valley and the Mississippi Alluvial Valley on pintail (Anas acuta) and mallard (Anas platyrhynchos) reproduction, respectively, using hierarchical Bayesian methods to correct for sampling bias in productivity estimates and observation error in covariates. We applied the model to a hunter-collected parts survey implemented by the United States Fish and Wildlife Service and band recoveries reported to the United States Geological Survey Bird Banding Laboratory using data from 1961 to 2013. We compared our results to previous estimates that used simple linear regression on uncorrected age ratios from a smaller subset of years in pintail (1961–1985). Like previous analyses, we found large and consistent effects of population size and wetland conditions in prairie Canada on mallard productivity, and large effects of population size and mean latitude of the observed breeding population on pintail productivity. Unlike previous analyses, we report a large amount of uncertainty in the estimated effects of wintering-ground precipitation on pintail and mallard productivity, with considerable uncertainty in the sign of the estimated main effect, although the posterior medians of precipitation effects were consistent with past studies. We found more consistent estimates in the sign of an interaction effect between population size and precipitation, suggesting that wintering-ground precipitation has a larger effect in years of high population size, especially for pintail. When we used the estimated effects in a population model to derive a sustainable harvest and population size projection (i.e., a yield curve), there was considerable uncertainty in the effect of increased or decreased wintering-ground precipitation on sustainable harvest potential and population size. These results suggest that the mechanism of cross-seasonal effects between winter habitat and reproduction in ducks occurs through a reduction in the strength of density dependence in years of above-average wintering-ground precipitation. We suggest additional investigation of the underlying mechanisms and that habitat managers and decision-makers consider the level of uncertainty in these estimates when attempting to integrate habitat management and harvest management decisions. Collection of annual data on the status of wintering-ground habitat in a rigorous sampling framework would likely be the most direct way to improve understanding of mechanisms and inform management.
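
    The vulnerability correction at the heart of this argument can be stated in two lines: the raw fall age ratio is the product of reproductive success and the juvenile:adult vulnerability ratio, so dividing by a vulnerability estimate from band recoveries removes the bias in the mean. A minimal sketch with invented numbers (the hierarchical model additionally propagates the sampling uncertainty):

```python
# Hypothetical parts-survey counts and band recovery rates (all assumed)
juv_in_wings, adult_in_wings = 5200, 4100
recovery_juv, recovery_adult = 0.082, 0.055

raw_ratio = juv_in_wings / adult_in_wings          # biased upward
vulnerability = recovery_juv / recovery_adult      # juveniles easier to harvest
corrected = raw_ratio / vulnerability              # productivity estimate
print(f"raw {raw_ratio:.2f}, corrected {corrected:.2f} juveniles per adult")
```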

  5. The Effect of Novel Research Activities on Long-term Survival of Temporarily Captive Steller Sea Lions (Eumetopias jubatus)

    PubMed Central

    Shuert, Courtney; Horning, Markus; Mellish, Jo-Ann

    2015-01-01

    Two novel research approaches were developed to facilitate controlled access to, and long-term monitoring of, juvenile Steller sea lions for periods longer than typically afforded by traditional fieldwork. The Transient Juvenile Steller sea lion Project at the Alaska SeaLife Center facilitated nutritional, physiological, and behavioral studies on the platform of temporary captivity. Temporarily captive sea lions (TJs, n = 35) were studied and were intraperitoneally implanted with Life History Transmitters (LHX tags) to determine causes of mortality post-release. Our goal was to evaluate the potential for long-term impacts of temporary captivity and telemetry implants on the survival of study individuals. A simple open-population Cormack-Jolly-Seber mark-recapture model was built in program MARK, incorporating resightings of uniquely branded study individuals gathered by several contributing institutions. A priori models were developed to weigh the evidence of effects of experimental treatment on survival with covariates of sex, age, capture age, cohort, and age class. We compared survival of the experimental treatment group to a control group of n = 27 free-ranging animals (FRs) that were sampled during capture events and immediately released. Sex has previously been shown to differentially affect juvenile survival in Steller sea lions. Therefore, sex was included in all models to account for unbalanced sex ratios within the experimental group. Considerable support was identified for the effects of sex, accounting for over 71% of total weight for all a priori models with delta AICc <5, and over 91% of model weight after removal of pretending variables. Overall, most support was found for the most parsimonious model based on sex and excluding experimental treatment. Models including experimental treatment were not supported after post-hoc considerations of model selection criteria. However, given the limited sample size, alternate models including effects of experimental treatments remain possible and effects may yet become apparent in larger sample sizes. PMID:26580549
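
    The model-weighting described above rests on standard Akaike weights. A minimal sketch of that computation, with invented AICc scores rather than the study's actual model set:

    ```python
    import numpy as np

    # Minimal sketch: Akaike weights from AICc scores. The scores below are
    # invented; the study's models and weights differ.
    aicc = np.array([100.0, 101.8, 104.3, 106.9])
    delta = aicc - aicc.min()           # delta-AICc for each candidate model
    w = np.exp(-0.5 * delta)
    w /= w.sum()                        # model weights sum to 1
    print(w.round(3))
    ```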

  6. Columbia River White Sturgeon Genetics and Early Life History: Population Segregation and Juvenile Feeding Behavior, 1987 Final Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brannon, Ernest L.

    1988-06-01

    The geographic area of the genetics study broadly covered the distribution range of sturgeon in the Columbia, from below Bonneville Dam at Ilwaco to Lake Roosevelt, the Upper Snake River, and the Kootenai River. The two remote river sections provided data important for enhancement considerations. There was little electrophoretic variation seen among individuals from the Kootenai River. Upper Snake River sturgeon showed a higher percentage of polymorphic loci than the Kootenai fish, but lower than the other areas in the Columbia River we sampled. Sample size was increased in both Lake Roosevelt and at […]. Electrophoretic variation was specific to an individual sampling area in several cases, and this shaped our conclusions. The 1987 early life history studies concentrated on the feeding behavior of juvenile sturgeon. The chemostimulant components in prey attractive to sturgeon were examined, and the sensory systems utilized by foraging sturgeon were determined under different environmental conditions. These results were discussed with regard to the environmental changes that have occurred in the Columbia River. Under present river conditions, the feeding mechanism of sturgeon is more restricted to certain prey types, and their feeding range may be limited. In these situations, enhancement measures cannot be undertaken without consideration given to the introduction of food resources that will be readily available under present conditions. 89 refs., 7 figs., 11 tabs.

  7. Quantifying ADHD classroom inattentiveness, its moderators, and variability: a meta-analytic review.

    PubMed

    Kofler, Michael J; Rapport, Mark D; Alderson, R Matt

    2008-01-01

    Most classroom observation studies have documented significant deficiencies in the classroom attention of children with attention-deficit/hyperactivity disorder (ADHD) compared to their typically developing peers. The magnitude of these differences, however, varies considerably and may be influenced by contextual, sampling, diagnostic, and observational differences. We conducted a meta-analysis of 23 between-group classroom observation studies using weighted regression, publication bias, goodness of fit, best case, and original metric analyses. Across studies, a large effect size (ES = .73) was found prior to consideration of potential moderators. Weighted regression, best case, and original metric estimation indicate that this effect may be an underestimation of the classroom visual attention deficits of children with ADHD. Several methodological factors (classroom environment, sample characteristics, diagnostic procedures, and observational coding schema) differentially affect observed rates of classroom attentive behavior for children with ADHD and typically developing children. After accounting for these factors, children with ADHD were on-task approximately 75% of the time compared to 88% for their classroom peers (ES = 1.40). Children with ADHD were also more variable in their attentive behavior across studies. The present study confirmed that children with ADHD exhibit deficient and more variable visual attending to required stimuli in classroom settings and provided an aggregate estimation of the magnitude of these deficits at the group level. It also demonstrated the impact of situational, sampling, diagnostic, and observational variables on observed rates of on-task behavior.
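
    The aggregate effect sizes above come from weighted syntheses of study-level estimates. A minimal fixed-effect sketch of the core step, inverse-variance weighting, with invented study data:

    ```python
    import numpy as np

    # Minimal sketch: inverse-variance weighted mean effect size (fixed-effect
    # meta-analysis). Effect sizes and variances are invented for illustration.
    es = np.array([0.6, 0.8, 0.7, 0.9])     # per-study standardized mean differences
    var = np.array([0.04, 0.09, 0.05, 0.12])

    w = 1.0 / var
    pooled = np.sum(w * es) / np.sum(w)     # pooled effect size
    se = np.sqrt(1.0 / np.sum(w))           # standard error of the pooled estimate
    print(f"ES = {pooled:.2f} +/- {1.96 * se:.2f}")
    ```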

  8. Uncertainties in detecting decadal change in extractable soil elements in Northern Forests

    NASA Astrophysics Data System (ADS)

    Bartlett, O.; Bailey, S. W.; Ducey, M. J.

    2016-12-01

    Northern Forest ecosystems have been or are being impacted by land use change, forest harvesting, acid deposition, atmospheric CO2 enrichment, and climate change. Each of these has the potential to modify soil forming processes, and the resulting chemical stocks. Horizontal and vertical variations in concentrations complicate determination of temporal change. This study evaluates sample design, sample size, and differences among observers as sources of uncertainty when quantifying soil temporal change over regional scales. Forty permanent northern hardwood monitoring plots were established on the White Mountain National Forest in central New Hampshire and western Maine. Soil pits were characterized and sampled by genetic horizon at plot center in 2001 and resampled in 2014 two meters on contour from the original sampling location. Each soil horizon was characterized by depth, color, texture, structure, consistency, boundaries, coarse fragments, and roots from the forest floor to the upper C horizon, the relatively unaltered glacial till parent material. Laboratory analyses included pH in 0.01 M CaCl2 solution and extractable Ca, Mg, Na, K, Al, Mn, and P in 1 M NH4OAc solution buffered at pH 4.8. Significant elemental differences identified by genetic horizon from paired t-tests (p ≤ 0.05) indicate temporal change across the study region. A power analysis at 0.9 power (α = 0.05) revealed that the sample size was appropriate within this region to detect concentration change by genetic horizon using a stratified sample design based on topographic metrics. There were no significant differences between observers' descriptions of physical properties. As physical properties would not be expected to change over a decade, this suggests spatial variation in physical properties between the pairs of sampling pits did not detract from our ability to detect temporal change. These results suggest that resampling efforts within a site, repeated across a region, to quantify elemental change by carefully described genetic horizons is an appropriate method of detecting soil temporal change in this region. Sample size and design considerations from this project will have direct implications for future monitoring programs to characterize change in soil chemistry.
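
    A power analysis of the kind reported above can be sketched with standard tools; the paired design reduces to a one-sample t-test on plot-level differences. The effect size below is an assumed value, not one estimated from this study:

    ```python
    from statsmodels.stats.power import TTestPower

    # Minimal sketch: plots needed to detect a standardized temporal change d
    # at 0.9 power and alpha = 0.05 with a paired (one-sample) t-test.
    # The effect size d = 0.6 is an assumption for illustration.
    n = TTestPower().solve_power(effect_size=0.6, alpha=0.05, power=0.9)
    print(round(n))    # number of resampled plot pairs required
    ```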

  9. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials.

    PubMed

    Mi, Michael Y; Betensky, Rebecca A

    2013-04-01

    Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample-size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Because the basic SPCD already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample-size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample-size re-estimation, up to 25% power was recovered from underestimated sample-size scenarios. Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used and may not generalize to all possible scenarios. Furthermore, dropout of patients is not considered in this study. It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments.
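
    Sample-size re-estimation of the kind tested in the third design can be sketched generically for a two-arm comparison of response rates. This is not the SPCD-specific machinery, and all rates below are invented:

    ```python
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Minimal sketch: at an interim look, re-estimate the per-arm sample size
    # from the observed response rates. Rates, alpha, and power are assumptions.
    p_drug, p_placebo = 0.45, 0.30
    es = proportion_effectsize(p_drug, p_placebo)
    n_per_arm = NormalIndPower().solve_power(effect_size=es, alpha=0.05, power=0.8)
    print(round(n_per_arm))    # re-estimated subjects per arm
    ```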

  10. An analysis of adaptive design variations on the sequential parallel comparison design for clinical trials

    PubMed Central

    Mi, Michael Y.; Betensky, Rebecca A.

    2013-01-01

    Background Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Purpose Because the basic SPCD already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Methods Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. Results From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample size re-estimation, up to 25% power was recovered from underestimated sample size scenarios. Limitations Given the numerous possible test parameters that could have been chosen for the simulations, the study’s results are limited to situations described by the parameters that were used, and may not generalize to all possible scenarios. Furthermore, drop-out of patients is not considered in this study. Conclusions It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments. PMID:23283576

  11. Preferential cytotoxicity of ZnO nanoparticle towards cervical cancer cells induced by ROS-mediated apoptosis and cell cycle arrest for cancer therapy

    NASA Astrophysics Data System (ADS)

    Sirelkhatim, Amna; Mahmud, Shahrom; Seeni, Azman; Kaus, Noor Haida Mohd

    2016-08-01

    The present study aimed to synthesize multifunctional ZnO-NP samples, namely ZnO-20, ZnO-40, and ZnO-80 nm, using different approaches, to be used as efficient anticancer agents. Systematic characterizations revealed their particle sizes and demonstrated nanostructures of nanorods (ZnO-80 nm) and nanogranules (ZnO-20 and ZnO-40 nm). They exhibited significant (p < 0.05) toxicity to HeLa cancer cells. HeLa cell viabilities at a 1 mM dose were reduced to 37, 32, and 15% by ZnO-80, ZnO-40, and ZnO-20 nm, respectively, at 48 h. However, the same dose only reduced the viability of L929 normal cells to 79.6, 76, and 75% at 48 h. Measurement of reactive oxygen species (ROS) showed considerable ROS yields in HeLa cells for all samples, with the most pronounced percentage (50%) displayed by ZnO-20 nm. Moreover, the samples induced ROS-mediated apoptosis and significantly blocked cell cycle progression in the S, G2/M, and G0/G1 phases (p < 0.05). Apoptosis induction was further confirmed by DNA fragmentation and by Hoechst-PI costained images viewed under a fluorescence microscope. Additionally, morphological changes of HeLa cells visualized under a light microscope showed an assortment of cell-death features involving shrinkage, vacuolization, and the formation of apoptotic bodies. Most importantly, the results exposed the impact of the size and morphology of the ZnO samples on their toxicity to HeLa cells, mediated mainly by ROS production. ZnO-20 nm in disk form, with its nanogranule shape and smallest particle size, was the most toxic sample, followed by ZnO-40 nm and then ZnO-80 nm. An additional mechanism proposed to contribute to cell death was the decomposition of ZnO into zinc ions (Zn2+) in the acidic cancer microenvironment, favored by the smaller sizes of the ZnO-NPs. This mechanism has been described in the literature as a size-dependent phenomenon. These findings are suggested to provide new platforms in the development of therapeutics selective against fatal cervical cancer, and to benefit from the synergistic influence of size and nanostructure when designing anticancer agents.

  12. Effect of experimental and sample factors on dehydration kinetics of mildronate dihydrate: mechanism of dehydration and determination of kinetic parameters.

    PubMed

    Bērziņš, Agris; Actiņš, Andris

    2014-06-01

    The dehydration kinetics of mildronate dihydrate [3-(1,1,1-trimethylhydrazin-1-ium-2-yl)propionate dihydrate] was analyzed in isothermal and nonisothermal modes. The particle size, sample preparation and storage, sample weight, nitrogen flow rate, relative humidity, and sample history were varied in order to evaluate the effect of these factors and to more accurately interpret the data obtained from such analysis. It was determined that comparable kinetic parameters can be obtained in both isothermal and nonisothermal modes. However, dehydration activation energy values obtained in nonisothermal mode varied with the degree of conversion, because the rate-limiting step has a different energy at higher temperature. Moreover, carrying out experiments in this mode required consideration of additional experimental complications. Our study of the effects of the different sample and experimental factors revealed information about changes in the energy of the dehydration rate-limiting step and about variable contributions from different rate-limiting steps, and it clarified the dehydration mechanism. Procedures for convenient and fast determination of dehydration kinetic parameters were offered. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  13. A Kolmogorov-Smirnov test for the molecular clock based on Bayesian ensembles of phylogenies

    PubMed Central

    Antoneli, Fernando; Passos, Fernando M.; Lopes, Luciano R.

    2018-01-01

    Divergence date estimates are central to understanding evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and the well-known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit tests. In the strict clock case, the method consists of using the one-sample KS test to directly test whether the phylogeny is clock-like, in other words, whether it follows a Poisson law. The ECD is computed from the discretized branch lengths, and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for the auto-correlation in the ensemble of trees and pseudo-replication, we take advantage of thinning and effective sample size, two features provided by Bayesian inference MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures, and in this case we propose the use of the two-sample KS test with samples from two continuous branch length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. Moreover, in this second form the test can also be applied to test for relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch lengths ECD, instead of one consensus tree, yields a considerable reduction of the effects of small sample size and provides a gain in power. PMID:29300759
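
    The strict-clock step described above is straightforward to sketch: discretized branch lengths are tested against a Poisson law whose rate is the mean branch length. The data below are simulated, and the usual caveat that KS tests on discrete data give conservative p-values applies:

    ```python
    import numpy as np
    from scipy import stats

    # Minimal sketch: one-sample KS test of discretized branch lengths against
    # a Poisson law. Data are simulated; the paper builds its ECD from an
    # ensemble of Bayesian trees after thinning.
    rng = np.random.default_rng(1)
    branch_counts = rng.poisson(3.2, size=500)   # stand-in for discretized branch lengths
    lam = branch_counts.mean()                   # Poisson rate = mean branch length
    D, p = stats.kstest(branch_counts, stats.poisson(lam).cdf)
    print(f"D = {D:.3f}, p = {p:.3f}")
    ```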

  14. Effects of Particle Size on the Attenuated Total Reflection Spectrum of Minerals.

    PubMed

    Udvardi, Beatrix; Kovács, István J; Fancsik, Tamás; Kónya, Péter; Bátori, Miklósné; Stercel, Ferenc; Falus, György; Szalai, Zoltán

    2017-06-01

    This study focuses on the effect of particle size on the attenuated total reflection Fourier transform infrared (ATR FT-IR) spectra of monomineralic powders. Six particle size fractions of quartz, feldspar, calcite, and dolomite were prepared (<2, 2-4, 4-8, 8-16, 16-32, and 32-63 µm). It is found that the width, intensity, and area of bands in the ATR FT-IR spectra of minerals depend explicitly on particle size. As particle size increases, the intensity and area of IR bands usually decrease while the width of bands increases. The band positions usually shifted to higher wavenumbers with decreasing particle size. Infrared spectra of minerals are most intense for the particle size fraction of 2-4 µm. However, if the particle size is very small (<2 µm), intensity decreases due to the wavelength and penetration depth of the IR light. Therefore, the quantity of very fine-grained minerals may be underestimated compared to the coarser phases. A nonlinear regression analysis of the data indicated that the average coefficients and indices of the power trend line equation imply a simple relationship between median particle diameter and absorbance at a given wavenumber. It is concluded that when powder samples with substantially different particle size are compared, as in regression analysis for modal predictions using ATR FT-IR, it is also important to report the grain size distribution or surface area of samples. The band area of water (3000-3620 cm⁻¹) is similar in each mineral fraction, except for the particles below 2 µm. It indicates that the finest particles could have disproportionately more water adsorbed on their larger surface area. Thus, these higher wavenumbers of the ATR FT-IR spectra may be more sensitive to this spectral interference if the number of particles below 2 µm is considerable. It is also concluded that at least a proportion of the moisture could be strongly adhered to the particles, given the band shift towards lower wavenumbers in the IR range of 3000-3620 cm⁻¹.
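
    The power-law relationship reported above between median particle diameter and absorbance can be sketched as a simple nonlinear fit. The data points below are invented for illustration only:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Minimal sketch: fit absorbance A at one wavenumber as A = a * d**b
    # against median particle diameter d. Values are invented, not measured.
    d = np.array([3.0, 6.0, 12.0, 24.0, 47.0])       # median diameters (um)
    A = np.array([0.60, 0.45, 0.33, 0.25, 0.18])     # band absorbance (a.u.)

    power_law = lambda x, a, b: a * x**b
    (a, b), _ = curve_fit(power_law, d, A, p0=(1.0, -0.4))
    print(f"A ~= {a:.2f} * d^{b:.2f}")
    ```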

  15. Methodological considerations for analyzing trabecular architecture: an example from the primate hand.

    PubMed

    Kivell, Tracy L; Skinner, Matthew M; Lazenby, Richard; Hublin, Jean-Jacques

    2011-02-01

    Micro-computed tomographic analyses of trabecular bone architecture have been used to clarify the link between positional behavior and skeletal anatomy in primates. However, there are methodological decisions associated with quantifying and comparing trabecular anatomy across taxa that vary greatly in body size and morphology that can affect characterizations of trabecular architecture, such as choice of the volume of interest (VOI) size and location. The potential effects of these decisions may be amplified in small, irregular-shaped bones of the hands and feet that have more complex external morphology and more heterogeneous trabecular structure compared to, for example, the spherical epiphysis of the femoral head. In this study we investigate the effects of changes in VOI size and location on standard trabecular parameters in two bones of the hand, the capitate and third metacarpal, in a diverse sample of nonhuman primates that vary greatly in morphology, body mass and positional behavior. Results demonstrate that changes in VOI location and, to a lesser extent, changes in VOI size had a dramatic effect on many trabecular parameters, especially trabecular connectivity and structure (rods vs. plates), degree of anisotropy, and the primary orientation of the trabeculae. Although previous research has shown that some trabecular parameters are susceptible to slight variations in methodology (e.g. VOI location, scan resolution), this study provides a quantification of these effects in hand bones of a diverse sample of primates. An a priori understanding of the inherent biases created by the choice of VOI size and particularly location is critical to robust trabecular analysis and functional interpretation, especially in small bones with complex arthroses. © 2010 The Authors. Journal of Anatomy © 2010 Anatomical Society of Great Britain and Ireland.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terai, Tsuyoshi; Takahashi, Jun; Itoh, Yoichi, E-mail: tsuyoshi.terai@nao.ac.jp

    Main-belt asteroids have been continuously colliding with one another since they were formed. Their size distribution is primarily determined by the size dependence of asteroid strength against catastrophic impacts. The strength scaling law as a function of body size could depend on collision velocity, but the relationship remains unknown, especially under hypervelocity collisions comparable to 10 km s⁻¹. We present a wide-field imaging survey at an ecliptic latitude of about 25° for investigating the size distribution of small main-belt asteroids that have highly inclined orbits. The analysis technique allowing for efficient asteroid detections and high-accuracy photometric measurements provides sufficient sample data to estimate the size distribution of sub-kilometer asteroids with inclinations larger than 14°. The best-fit power-law slopes of the cumulative size distribution are 1.25 ± 0.03 in the diameter range of 0.6-1.0 km and 1.84 ± 0.27 in 1.0-3.0 km. We provide a simple size distribution model that takes into consideration the oscillations of the power-law slope due to the transition from the gravity-scaled regime to the strength-scaled regime. We find that the high-inclination population has a shallow slope of the primary components of the size distribution compared to the low-inclination populations. The asteroid population exposed to hypervelocity impacts undergoes collisional processes where large bodies have a higher disruptive strength and longer lifespan relative to tiny bodies than the ecliptic asteroids.
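
    Power-law slopes of a cumulative size distribution like those above are typically estimated on log-log axes. A minimal sketch on simulated diameters (the survey's detection and calibration steps are not modeled):

    ```python
    import numpy as np

    # Minimal sketch: estimate b in N(>D) ~ D**-b by a log-log linear fit.
    # Diameters are simulated from a Pareto law with b = 1.25 for illustration.
    rng = np.random.default_rng(0)
    D = 0.6 * (rng.pareto(1.25, 2000) + 1.0)           # diameters (km), D >= 0.6
    D_grid = np.linspace(0.6, 1.0, 20)
    N_cum = np.array([(D > x).sum() for x in D_grid])  # cumulative counts
    slope, _ = np.polyfit(np.log10(D_grid), np.log10(N_cum), 1)
    print(f"b = {-slope:.2f}")                         # should land near 1.25
    ```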

  17. Considerations in choosing a primary endpoint that measures durability of virological suppression in an antiretroviral trial.

    PubMed

    Gilbert, P B; Ribaudo, H J; Greenberg, L; Yu, G; Bosch, R J; Tierney, C; Kuritzkes, D R

    2000-09-08

    At present, many clinical trials of anti-HIV-1 therapies compare treatments by a primary endpoint that measures the durability of suppression of HIV-1 replication. Several durability endpoints are compared by their implicit assumptions regarding surrogacy for clinical outcomes, sample size requirements, and accommodations for inter-patient differences in baseline plasma HIV-1-RNA levels and in initial treatment response. Virological failure is defined by the non-suppression of virus levels at a prespecified follow-up time T (early virological failure), or by relapse. A binary virological failure endpoint is compared with three time-to-virological failure endpoints: time from (i) randomization that assigns early failures a failure time of T weeks; (ii) randomization that extends the early failure time T for slowly responding subjects; and (iii) virological response that assigns non-responders a failure time of 0 weeks. Endpoint differences are illustrated with Agouron's trial 511. In comparing high- with low-dose nelfinavir (NFV) regimens in Agouron 511, the difference in Kaplan-Meier estimates of the proportion not failing by 24 weeks is 16.7% (P = 0.048), 6.5% (P = 0.29) and 22.9% (P = 0.0030) for endpoints (i), (ii) and (iii), respectively. The results differ because NFV suppresses virus more quickly at the higher dose, and the endpoints weigh this treatment difference differently. This illustrates that careful consideration needs to be given to choosing a primary endpoint that will detect treatment differences of interest. A time from randomization endpoint is usually recommended because of its advantages in flexibility and sample size, especially at interim analyses, and for its interpretation for patient management.

  18. Taphonomic bias in pollen and spore record: a review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisk, L.H.

    The high dispersibility and ease of pollen and spore transport have led researchers to conclude erroneously that fossil pollen and spore floras are relatively complete and record unbiased representations of the regional vegetation extant at the time of sediment deposition. That such conclusions are unjustified is obvious when the authors remember that palynomorphs are merely organic sedimentary particles and undergo hydraulic sorting not unlike clastic sedimentary particles. Prior to deposition in the fossil record, pollen and spores can be hydraulically sorted by size, shape, and weight, subtly biasing relative frequencies in fossil assemblages. Sorting during transport results in palynofloras whose composition is environmentally dependent. Therefore, depositional environment is an important consideration to make correct inferences on the source vegetation. Sediment particle size of original rock samples may contain important information on the probability of a taphonomically biased pollen and spore assemblage. In addition, a reasonable test of hydraulic sorting is the distribution of pollen grain sizes and shapes in each assemblage. Any assemblage containing a wide spectrum of grain sizes and shapes has obviously not undergone significant sorting. If unrecognized, taphonomic bias can lead to paleoecologic, paleoclimatic, and even biostratigraphic misinterpretations.

  19. Consultant-Client Relationship and Knowledge Transfer in Small- and Medium-Sized Enterprises Change Processes.

    PubMed

    Martinez, Luis F; Ferreira, Aristides I; Can, Amina B

    2016-04-01

    Based on Szulanski's knowledge transfer model, this study examined how the communicational, motivational, and sharing of understanding variables influenced knowledge transfer and change processes in small- and medium-sized enterprises, particularly under projects developed by funded programs. The sample comprised 144 entrepreneurs, mostly male (65.3%) and mostly ages 35 to 45 years (40.3%), who completed an online questionnaire measuring the variables of "sharing of understanding," "motivation," "communication encoding competencies," "source credibility," "knowledge transfer," and "organizational change." Data were collected between 2011 and 2012 and measured the relationship between clients and consultants working in a Portuguese small- and medium-sized enterprise-oriented action learning program. To test the hypotheses, structural equation modeling was conducted to identify the antecedents of sharing of understanding, motivational, and communicational variables, which were positively correlated with the knowledge transfer between consultants and clients. This transfer was also positively correlated with organizational change. Overall, the study provides important considerations for practitioners and academicians and establishes new avenues for future studies concerning the issues of the consultant-client relationship and the efficacy of government-funded programs designed to improve performance of small- and medium-sized enterprises. © The Author(s) 2016.

  20. A cavitation transition in the energy landscape of simple cohesive liquids and glasses

    NASA Astrophysics Data System (ADS)

    Altabet, Y. Elia; Stillinger, Frank H.; Debenedetti, Pablo G.

    2016-12-01

    In particle systems with cohesive interactions, the pressure-density relationship of the mechanically stable inherent structures sampled along a liquid isotherm (i.e., the equation of state of an energy landscape) will display a minimum at the Sastry density ρS. The tensile limit at ρS is due to cavitation that occurs upon energy minimization, and previous characterizations of this behavior suggested that ρS is a spinodal-like limit that separates all homogeneous and fractured inherent structures. Here, we revisit the phenomenology of Sastry behavior and find that it is subject to considerable finite-size effects, and the development of the inherent structure equation of state with system size is consistent with the finite-size rounding of an athermal phase transition. What appears to be a continuous spinodal-like point at finite system sizes becomes discontinuous in the thermodynamic limit, indicating behavior akin to a phase transition. We also study cavitation in glassy packings subjected to athermal expansion. Many individual expansion trajectories averaged together produce a smooth equation of state, which we find also exhibits features of finite-size rounding, and the examples studied in this work give rise to a larger limiting tension than for the corresponding landscape equation of state.

  1. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
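
    The bias mechanism described above is easy to demonstrate in miniature: when prevalence differs between habitat strata and a convenience sample over-covers one stratum, the naive estimate drifts away from the truth. All numbers below are invented:

    ```python
    import numpy as np

    # Minimal sketch: convenience vs. probability sampling of disease prevalence
    # across two habitat strata. Prevalences, population shares, and the
    # convenience-access bias are all assumptions for illustration.
    rng = np.random.default_rng(7)
    prev = {"river corridor": 0.10, "upland": 0.02}       # true stratum prevalence
    pop_share = {"river corridor": 0.3, "upland": 0.7}    # population distribution
    conv_share = {"river corridor": 0.8, "upland": 0.2}   # where sampling access concentrates
    n = 500

    true_prev = sum(prev[s] * pop_share[s] for s in prev)
    prob_hits = sum(rng.binomial(int(n * pop_share[s]), prev[s]) for s in prev)
    conv_hits = sum(rng.binomial(int(n * conv_share[s]), prev[s]) for s in prev)
    print(f"true {true_prev:.3f}  probability {prob_hits / n:.3f}  convenience {conv_hits / n:.3f}")
    ```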

  2. Spatial Variation in Soil Properties among North American Ecosystems and Guidelines for Sampling Designs

    PubMed Central

    Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max

    2014-01-01

    Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical to developing robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but these recommendations are rarely followed, and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12 dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. PMID:24465377
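
    The "within 10% of the mean with 90% confidence" criterion above maps onto a standard sample-size formula once a coefficient of variation is in hand from a pilot survey or semivariogram. A minimal sketch with an assumed CV:

    ```python
    import numpy as np
    from scipy import stats

    # Minimal sketch: independent samples needed to estimate a mean to within
    # a relative error of 10% with 90% confidence. The CV is an assumption.
    cv, rel_err, conf = 0.25, 0.10, 0.90
    z = stats.norm.ppf(1 - (1 - conf) / 2)      # two-sided critical value
    n = (z * cv / rel_err) ** 2
    print(int(np.ceil(n)))                      # e.g., 17 samples for CV = 0.25
    ```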

  3. Effects of high power ultrasonic vibration on the cold compaction of titanium.

    PubMed

    Fartashvand, Vahid; Abdullah, Amir; Ali Sadough Vanini, Seyed

    2017-05-01

    Titanium has been widely used in the chemical and aerospace industries. In order to overcome the drawbacks of cold compaction of titanium, the process was assisted by an ultrasonic vibration system. For this purpose, a uniaxial ultrasonic assisted cold powder compaction system was designed and fabricated. The process variables were powder size, compaction pressure and initial powder compact thickness. Density, friction force, ejection force and spring back of the fabricated samples were measured and studied. The density was observed to improve under the action of ultrasonic vibration. Fine powders consolidated better when ultrasonic vibration was used. Under the ultrasonic action, it is thought that the friction forces between the die walls and the particles and the friction forces among the powder particles are reduced. Spring back and ejection force did not change considerably when ultrasonic vibration was used. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. [Comparative quality measurements part 3: funnel plots].

    PubMed

    Kottner, Jan; Lahmann, Nils

    2014-02-01

    Comparative quality measurements between organisations or institutions are common. Quality measures need to be standardised and risk adjusted. Random error must also be taken adequately into account. Rankings without consideration of the precision lead to flawed interpretations and encourage "gaming". Application of confidence intervals is one possibility to take chance variation into account. Funnel plots are modified control charts based on Statistical Process Control (SPC) theory. The quality measures are plotted against their sample size. Warning and control limits that are 2 or 3 standard deviations from the center line are added. With increasing group size the precision increases, and so the control limits form a funnel. Data points within the control limits are considered to show common cause variation, and data points outside them special cause variation, without any focus on spurious rankings. Funnel plots offer data-based information about how to evaluate institutional performance within quality management contexts.
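
    For a proportion-type indicator, the funnel-plot limits described above are simply the overall rate plus or minus 2 and 3 standard errors at each institution size. A minimal sketch (the center-line rate and sizes are invented):

    ```python
    import numpy as np

    # Minimal sketch: funnel-plot warning (2 SE) and control (3 SE) limits for
    # a proportion indicator across institution sizes. p0 is an assumed rate.
    p0 = 0.12                                   # overall event rate (center line)
    n = np.arange(20, 2001, 20)                 # institution sample sizes
    se = np.sqrt(p0 * (1 - p0) / n)
    warn = (np.clip(p0 - 2 * se, 0, 1), np.clip(p0 + 2 * se, 0, 1))
    ctrl = (np.clip(p0 - 3 * se, 0, 1), np.clip(p0 + 3 * se, 0, 1))
    print(warn[1][:3].round(3))                 # limits funnel in toward p0 as n grows
    ```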

  5. The Influence of Framing on Risky Decisions: A Meta-analysis.

    PubMed

    Kühberger

    1998-07-01

    In framing studies, logically equivalent choice situations are described differently and the resulting preferences are studied. A meta-analysis of framing effects is presented for risky choice problems which are framed either as gains or as losses. The meta-analysis evaluates the finding that highlighting the positive aspects of formally identical problems leads to risk aversion and that highlighting their equivalent negative aspects leads to risk seeking. Based on a data pool of 136 empirical papers that reported framing experiments with nearly 30,000 participants, we calculated 230 effect sizes. Results show that the overall framing effect between conditions is of small to moderate size and that profound differences exist between research designs. Potentially relevant characteristics were coded for each study. The most important characteristics were whether framing is manipulated by changing reference points or by manipulating outcome salience, and response mode (choice vs. rating/judgment). Further important characteristics were whether options differ qualitatively or quantitatively in risk, whether there are one or multiple risky events, whether framing is manipulated by gain/loss or by task-responsive wording, whether dependent variables are measured between- or within-subjects, and problem domains. Sample (students vs. target populations) and unit of analysis (individual vs. group) were not influential. It is concluded that framing is a reliable phenomenon, but that outcome salience manipulations, which constitute a considerable amount of work, have to be distinguished from reference point manipulations and that procedural features of experimental settings have a considerable effect on effect sizes in framing experiments. Copyright 1998 Academic Press.

  6. Passive Samplers for Investigations of Air Quality: Method Description, Implementation, and Comparison to Alternative Sampling Methods

    EPA Science Inventory

    This Paper covers the basics of passive sampler design, compares passive samplers to conventional methods of air sampling, and discusses considerations when implementing a passive sampling program. The Paper also discusses field sampling and sample analysis considerations to ensu...

  7. Sampling benthic macroinvertebrates in a large flood-plain river: Considerations of study design, sample size, and cost

    USGS Publications Warehouse

    Bartsch, L.A.; Richardson, W.B.; Naimo, T.J.

    1998-01-01

    Estimation of benthic macroinvertebrate populations over large spatial scales is difficult due to the high variability in abundance and the cost of sample processing and taxonomic analysis. To determine a cost-effective, statistically powerful sample design, we conducted an exploratory study of the spatial variation of benthic macroinvertebrates in a 37 km reach of the Upper Mississippi River. We sampled benthos at 36 sites within each of two strata, contiguous backwater and channel border. Three standard ponar (525 cm²) grab samples were obtained at each site ('Original Design'). Analysis of variance and sampling cost of strata-wide estimates for abundance of Oligochaeta, Chironomidae, and total invertebrates showed that only one ponar sample per site ('Reduced Design') yielded essentially the same abundance estimates as the Original Design, while reducing the overall cost by 63%. A posteriori statistical power analysis (alpha = 0.05, beta = 0.20) on the Reduced Design estimated that at least 18 sites per stratum were needed to detect differences in mean abundance between contiguous backwater and channel border areas for Oligochaeta, Chironomidae, and total invertebrates. Statistical power was nearly identical for the three taxonomic groups. The abundances of several taxa of concern (e.g., Hexagenia mayflies and Musculium fingernail clams) were too spatially variable to estimate power with our method. Resampling simulations indicated that to achieve adequate sampling precision for Oligochaeta, at least 36 sample sites per stratum would be required, whereas a sampling precision of 0.2 would not be attained with any sample size for Hexagenia in channel border areas, or Chironomidae and Musculium in both strata given the variance structure of the original samples. Community-wide diversity indices (Brillouin and 1 − Simpson's) increased as sample area per site increased. The backwater area had higher diversity than the channel border area. The number of sampling sites required to sample benthic macroinvertebrates during our sampling period depended on the study objective and ranged from 18 to more than 40 sites per stratum. No single sampling regime would efficiently and adequately sample all components of the macroinvertebrate community.

  8. The Complex Refractive Index of Volcanic Ash Aerosol Retrieved From Spectral Mass Extinction

    NASA Astrophysics Data System (ADS)

    Reed, Benjamin E.; Peters, Daniel M.; McPheat, Robert; Grainger, R. G.

    2018-01-01

    The complex refractive indices of eight volcanic ash samples, chosen to have a representative range of SiO2 contents, were retrieved from simultaneous measurements of their spectral mass extinction coefficient and size distribution. The mass extinction coefficients, at 0.33-19 μm, were measured using two optical systems: a Fourier transform spectrometer in the infrared and two diffraction grating spectrometers covering visible and ultraviolet wavelengths. The particle size distribution was measured using a scanning mobility particle sizer and an optical particle counter; values for the effective radius of ash particles measured in this study varied from 0.574 to 1.16 μm. Verification retrievals on high-purity silica aerosol demonstrated that the Rayleigh continuous distribution of ellipsoids (CDEs) scattering model significantly outperformed Mie theory in retrieving the complex refractive index, when compared to literature values. Assuming the silica particles provided a good analogue of volcanic ash, the CDE scattering model was applied to retrieve the complex refractive index of the eight ash samples. The Lorentz formulation of the complex refractive index was used within the retrievals as a convenient way to ensure consistency with the Kramers-Kronig relation. The short-wavelength limit of the electric susceptibility was constrained by using independently measured reference values of the complex refractive index of the ash samples at a visible wavelength. The retrieved values of the complex refractive indices of the ash samples showed considerable variation, highlighting the importance of using accurate refractive index data in ash cloud radiative transfer models.

  9. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.

  10. ULTRAFINE AEROSOL INFLUENCE ON THE SAMPLING BY CASCADE IMPACTOR.

    PubMed

    Vasyanovich, M; Mostafa, M Y A; Zhukovsky, M

    2017-11-01

    Cascade impactors based on inertial deposition of aerosols are widely used to determine the size distribution of radioactive aerosols. However, there are situations where radioactive aerosols are represented by particles with a diameter of 1-5 nm. In this case, ultrafine aerosols can be deposited on impactor cascades by a diffusion mechanism. The influence of ultrafine aerosols (1-5 nm) on the response of three different types of cascade impactors was studied. It was shown that the diffusion deposition of ultrafine aerosols can distort the response of the cascade impactor. The influence of diffusion deposition of ultrafine aerosols can be largely removed by installing mesh screens or a diffusion battery upstream of the cascade impactor during aerosol sampling. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000, the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
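
    One simple form of the adjustment discussed above follows from the fact that the ML fit statistic scales with n − 1, so a chi-square observed at one sample size can be rescaled to a target size. A sketch of that rescaling (whether it matches the article's exact adjustment function is not asserted here):

    ```python
    # Minimal sketch: rescale a model-fit chi-square from the original sample
    # size to a target size, using the linear-in-(n - 1) relationship of the
    # ML fit statistic. The chi-square value below is invented.
    def adjusted_chi2(chi2, n_orig, n_target):
        return chi2 * (n_target - 1) / (n_orig - 1)

    print(round(adjusted_chi2(chi2=840.0, n_orig=21000, n_target=5000), 1))
    ```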

  12. The Importance and Role of Intracluster Correlations in Planning Cluster Trials

    PubMed Central

    Preisser, John S.; Reboussin, Beth A.; Song, Eun-Young; Wolfson, Mark

    2008-01-01

    There is increasing recognition of the critical role of intracluster correlations of health behavior outcomes in cluster intervention trials. This study examines the estimation, reporting, and use of intracluster correlations in planning cluster trials. We use an estimating equations approach to estimate the intracluster correlations corresponding to the multiple-time-point nested cross-sectional design. Sample size formulae incorporating 2 types of intracluster correlations are examined for the purpose of planning future trials. The traditional intracluster correlation is the correlation among individuals within the same community at a specific time point. A second type is the correlation among individuals within the same community at different time points. For a “time × condition” analysis of a pretest–posttest nested cross-sectional trial design, we show that statistical power considerations based upon a posttest-only design generally are not an adequate substitute for sample size calculations that incorporate both types of intracluster correlations. Estimation, reporting, and use of intracluster correlations are illustrated for several dichotomous measures related to underage drinking collected as part of a large nonrandomized trial to enforce underage drinking laws in the United States from 1998 to 2004. PMID:17879427
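
    The simplest one-ICC version of the sample-size inflation discussed above is the design effect. A sketch of that baseline case (the paper's formulae additionally involve a cross-time ICC for pretest-posttest nested cross-sectional designs):

    ```python
    # Minimal sketch: inflate an individually randomized sample size by the
    # design effect 1 + (m - 1) * ICC for clusters of size m. Inputs are
    # illustrative assumptions.
    def cluster_sample_size(n_individual, m, icc):
        deff = 1 + (m - 1) * icc
        return round(n_individual * deff)

    print(cluster_sample_size(n_individual=400, m=50, icc=0.02))   # -> 792
    ```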

  13. A critical look at national monitoring programs for birds and other wildlife species

    USGS Publications Warehouse

    Sauer, J.R.; O'Shea, T.J.; Bogon, M.A.

    2003-01-01

    Concerns about declines in numerous taxa have created a great deal of interest in survey development. Because birds have traditionally been monitored by a variety of methods, bird surveys form natural models for development of surveys for other taxa. Here I suggest that most bird surveys are not appropriate models for survey design. Most lack important design components associated with estimation of population parameters at sample sites or with sampling over space, leading to estimates that may be biased. I discuss the limitations of national bird monitoring programs designed to monitor population size. Although these surveys are often analyzed, careful consideration must be given to factors that may bias estimates but that cannot be evaluated within the survey. Bird surveys with appropriate designs have generally been developed as part of management programs that have specific information needs. Experiences gained from bird surveys provide important information for development of surveys for other taxa, and statistical developments in estimation of population sizes from counts provide new approaches to overcoming the limitations evident in many bird surveys. Design of surveys is a collaborative effort, requiring input from biologists, statisticians, and the managers who will use the information from the surveys.

  14. Female reproductive success variation in a Pseudotsuga menziesii seed orchard as revealed by pedigree reconstruction from a bulk seed collection.

    PubMed

    El-Kassaby, Yousry A; Funda, Tomas; Lai, Ben S K

    2010-01-01

    The impact of female reproductive success on the mating system, gene flow, and genetic diversity of the filial generation was studied using a random sample of 801 bulk seed from a 49-clone Pseudotsuga menziesii seed orchard. We used microsatellite DNA fingerprinting and pedigree reconstruction to assign each seed's maternal and paternal parents and directly estimated clonal reproductive success, selfing rate, and the proportion of seed sired by outside pollen sources. Unlike most family array mating system and gene flow studies conducted on natural and experimental populations, which used an equal number of seeds per maternal genotype and thus generated unbiased inferences only on male reproductive success, the random sample we used was representative of the entire seed crop and therefore provided a unique opportunity to draw unbiased inferences on both female and male reproductive success variation. Selfing rate and the number of seed sired by outside pollen sources were found to be a function of female fertility variation. This variation also substantially and negatively affected female effective population size. Additionally, the results provided convincing evidence that the use of clone size as a proxy for fertility is questionable and requires further consideration.

  15. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method

    PubMed Central

    2013-01-01

    Background Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories in great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have recently been proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. Methods We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Results and conclusions Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis of the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers. PMID:24565158
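
    The grow-a-tree-toward-a-goal-region idea above has a compact generic form. The toy sketch below works in a 2D stand-in for conformational space and omits everything molecular (fragment moves, energy tests, the projection layer, the temperature scheme):

    ```python
    import math, random

    # Minimal goal-biased random-tree (RRT-style) sketch in a toy 2D space.
    # Start, goal, step size, and bias are arbitrary illustrative choices.
    random.seed(3)
    start, goal = (0.0, 0.0), (9.0, 9.0)
    step, goal_bias, goal_radius = 0.5, 0.2, 0.5
    tree = {start: None}                                  # node -> parent

    def extend(target):
        near = min(tree, key=lambda p: math.dist(p, target))
        d = math.dist(near, target)
        if d == 0.0:
            return near
        new = tuple(a + step * (b - a) / d for a, b in zip(near, target))
        tree[new] = near                                  # accept unconditionally (no energy test)
        return new

    node = start
    for _ in range(2000):
        target = goal if random.random() < goal_bias else (
            random.uniform(0, 10), random.uniform(0, 10))
        node = extend(target)
        if math.dist(node, goal) < goal_radius:           # reached the goal region
            break

    path = [node]                                         # walk back to recover one path
    while tree[path[-1]] is not None:
        path.append(tree[path[-1]])
    print(f"tree size {len(tree)}, path nodes {len(path)}")
    ```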

  16. Population variability complicates the accurate detection of climate change responses.

    PubMed

    McCain, Christy; Szewczyk, Tim; Bracy Knight, Kevin

    2016-06-01

    The rush to assess species' responses to anthropogenic climate change (CC) has underestimated the importance of interannual population variability (PV). Researchers assume sampling rigor alone will lead to an accurate detection of response regardless of the underlying population fluctuations of the species under consideration. Using population simulations across a realistic, empirically based gradient in PV, we show that moderate to high PV can lead to opposite and biased conclusions about CC responses. Between pre- and post-CC sampling bouts of modeled populations, as in resurvey studies, there is (i) a 50% probability of erroneously detecting the opposite trend in population abundance change and nearly zero probability of detecting no change; (ii) across multiple years of sampling, a near impossibility of accurately detecting any directional shift in population sizes with even moderate PV; (iii) up to a 50% probability of detecting a population extirpation when the species is present but at very low natural abundance; and (iv) under scenarios of moderate to high PV across a species' range or at the range edges, a bias toward erroneous detection of range shifts or contractions. Essentially, the frequency and magnitude of population peaks and troughs greatly impact the accuracy of our CC response measurements. Species with moderate to high PV (many small vertebrates, invertebrates, and annual plants) may be inaccurate 'canaries in the coal mine' for CC without pertinent demographic analyses and additional repeat sampling. Variation in PV may explain some idiosyncrasies in CC responses detected so far and urgently needs more careful consideration in design and analysis of CC responses. © 2016 John Wiley & Sons Ltd.
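
    A minimal Monte Carlo sketch of the core point, under invented parameter values: a population with lognormal interannual variability and a modest true decline is sampled once before and once after CC, and with high PV the observed change frequently has the wrong sign.

      import random

      random.seed(42)

      def p_wrong_direction(pv_sd, true_decline=0.2, n_trials=10000):
          wrong = 0
          for _ in range(n_trials):
              pre = 100 * random.lognormvariate(0, pv_sd)    # pre-CC survey
              post = 100 * (1 - true_decline) * random.lognormvariate(0, pv_sd)
              if post > pre:   # apparent increase despite a true decline
                  wrong += 1
          return wrong / n_trials

      for pv in (0.1, 0.5, 1.0):   # low, moderate, high interannual variability
          print(f"PV sd = {pv}: P(opposite trend detected) = {p_wrong_direction(pv):.2f}")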

  17. Time and size resolved Measurement of Mass Concentration at an Urban Site

    NASA Astrophysics Data System (ADS)

    Karg, E.; Ferron, G. A.; Heyder, J.

    2003-04-01

    Time- and size-resolved measurements of ambient particles are necessary for modelling of atmospheric particle transport, the interpretation of particulate pollution events and the estimation of particle deposition in the human lungs. In the size range 0.01-2 µm, time- and size-resolved data are obtained from differential mobility and optical particle counter measurements, and from gravimetric filter analyses on a daily basis (PM2.5). By comparing the time-averaged and size-integrated particle volume concentration with PM2.5 data, an average density of ambient particles can be estimated. Using this density, the number concentration data can be converted into time- and size-resolved mass concentrations. Such measurements were carried out at a Munich downtown crossroads. The spectra were integrated in the size ranges 10-100 nm, 100-500 nm and 500-2000 nm. Particles in these ranges are named ultrafine, fine and coarse particles. These ranges roughly represent freshly emitted particles, aged/accumulated particles and particles entrained by erosive processes. An average number concentration of 80000 1/cm3 (s.d. 67%), a particle volume concentration of 53 µm3/cm3 (s.d. 76%) and a PM2.5 mass concentration of 27 µg/m3 were found. These particle volume and PM2.5 data imply an average density of 0.51 g/cm3. Of the total number concentration, 95.3%, 4.7% and 0.006% fell in the three size ranges, respectively. Mass concentration was 14.7%, 80.2% and 5.1% of the total, assuming the average density to be valid for all particles. The variability in mass concentration was 94%, 75% and 33% for the three size ranges. Nearly all ambient particles were thus in the ultrafine size range, whereas most of the mass concentration was in the fine size range; nevertheless, a considerable mass fraction of nearly 15% was found in the ultrafine size range. As the sampling site was close to the road and traffic emissions were the major source of the particles, 1) the density was very low due to the agglomerated and porous structures of freshly emitted combustion particles, and 2) the variability was highest in the ultrafine range, evidently correlated with traffic activity, and lowest in the micron size range. In conclusion, the ultrafine particles that dominated the number concentration had a very low density, consistent with agglomerated and porous particles emitted by vehicles passing the crossroads, and accordingly showed a much higher variation in mass concentration than the fine and coarse particles.
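
    The density estimate quoted above can be reproduced directly from the two reported measurements; a small sketch with the unit bookkeeping in the comments (values taken from the abstract):

      volume_conc = 53.0   # µm3/cm3, integrated from the size spectra
      pm25 = 27.0          # µg/m3, from the gravimetric filters

      # 1 µm3/cm3 of particle volume corresponds to 1 µg/m3 of mass at a
      # density of 1 g/cm3, so the effective density is simply the ratio.
      density = pm25 / volume_conc
      print(f"effective particle density ≈ {density:.2f} g/cm3")   # ≈ 0.51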

  18. Natural fracture systems on planetary surfaces: Genetic classification and pattern randomness

    NASA Technical Reports Server (NTRS)

    Rossbacher, Lisa A.

    1987-01-01

    One method for classifying natural fracture systems is by fracture genesis. This approach involves the physics of the formation process, and it has been used most frequently in attempts to predict subsurface fractures and petroleum reservoir productivity. This classification system can also be applied to larger fracture systems on any planetary surface. One problem in applying this classification system to planetary surfaces is that it was developed for relatively small-scale fractures that would influence porosity, particularly as observed in a core sample. Planetary studies also require consideration of large-scale fractures. Nevertheless, this system offers some valuable perspectives on fracture systems of any size.

  19. Hierarchical distance-sampling models to estimate population size and habitat-specific abundance of an island endemic

    USGS Publications Warehouse

    Sillett, Scott T.; Chandler, Richard B.; Royle, J. Andrew; Kéry, Marc; Morrison, Scott A.

    2012-01-01

    Population size and habitat-specific abundance estimates are essential for conservation management. A major impediment to obtaining such estimates is that few statistical models are able to simultaneously account for both spatial variation in abundance and heterogeneity in detection probability, and still be amenable to large-scale applications. The hierarchical distance-sampling model of J. A. Royle, D. K. Dawson, and S. Bates provides a practical solution. Here, we extend this model to estimate habitat-specific abundance and rangewide population size of a bird species of management concern, the Island Scrub-Jay (Aphelocoma insularis), which occurs solely on Santa Cruz Island, California, USA. We surveyed 307 randomly selected, 300 m diameter, point locations throughout the 250-km2 island during October 2008 and April 2009. Population size was estimated to be 2267 (95% CI 1613-3007) and 1705 (1212-2369) during the fall and spring, respectively, considerably lower than a previously published but statistically problematic estimate of 12,500. This large discrepancy emphasizes the importance of proper survey design and analysis for obtaining reliable information for management decisions. Jays were most abundant in low-elevation chaparral habitat; the detection function depended primarily on the percent cover of chaparral and forest within count circles. Vegetation change on the island has been dramatic in recent decades, due to release from herbivory following the eradication of feral sheep (Ovis aries) from the majority of the island in the mid-1980s. We applied best-fit fall and spring models of habitat-specific jay abundance to a vegetation map from 1985, and estimated that the population size of A. insularis was 1400-1500 at that time. The 20-30% increase in the jay population suggests that the species has benefited from the recovery of native vegetation since sheep removal. Nevertheless, this jay's tiny range and small population size make it vulnerable to natural disasters and to habitat alteration related to climate change. Our results demonstrate that hierarchical distance-sampling models hold promise for estimating population size and spatial density variation at large scales. Our statistical methods have been incorporated into the R package unmarked to facilitate their use by animal ecologists, and we provide annotated code in the Supplement.
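
    The study's hierarchical analysis was done with the R package unmarked; as a toy illustration of the distance-sampling core only (no habitat covariates, no hierarchical abundance model), the following Python sketch simulates point-count detections, fits a half-normal detection function by maximum likelihood, and computes the mean detection probability used to correct raw counts. All numbers are invented.

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(0)
      R = 150.0   # plot radius in metres (300 m diameter point counts)

      # Simulate birds placed uniformly in the circle, detected with a
      # half-normal probability g(r) = exp(-r^2 / (2 sigma^2)).
      r_all = R * np.sqrt(rng.uniform(size=5000))
      sigma_true = 60.0
      keep = rng.uniform(size=r_all.size) < np.exp(-r_all**2 / (2 * sigma_true**2))
      r_obs = r_all[keep]

      def negloglik(log_sigma):
          s2 = np.exp(2 * log_sigma)
          # pdf of an observed distance is r * g(r) / normaliser on (0, R]
          norm = quad(lambda r: r * np.exp(-r**2 / (2 * s2)), 0, R)[0]
          return -np.sum(np.log(r_obs * np.exp(-r_obs**2 / (2 * s2)) / norm))

      fit = minimize_scalar(negloglik, bounds=(2, 6), method="bounded")
      sigma_hat = np.exp(fit.x)
      # Mean detection probability over the plot corrects the raw counts.
      pbar = quad(lambda r: (2 * r / R**2) * np.exp(-r**2 / (2 * sigma_hat**2)), 0, R)[0]
      print(f"sigma ≈ {sigma_hat:.1f} m, mean detection probability ≈ {pbar:.2f}")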

  20. Impact of Gd3+/graphene substitution on the physical properties of magnesium ferrite nanocomposites

    NASA Astrophysics Data System (ADS)

    Ateia, Ebtesam E.; Mohamed, Amira T.; Elsayed, Kareem

    2018-04-01

    Magnesium nanoferrites with compositions MgFe2O4, MgGd0.05Fe1.95O4 and MgFe2O4 - 5 wt% GO were synthesized using a citrate auto-combustion method. The crystal structure, morphology, and magnetic properties of the investigated samples were studied. High Resolution Transmission Electron Microscopy (HRTEM) images show that the substitution of small amounts of Gd3+/GO causes a considerable reduction of the grain size. Studies on the magnetic properties demonstrate that the coercivity of GO-substituted magnesium nanoferrites is enhanced from 72 Oe to 203 Oe and the magnetocrystalline anisotropy constant increases from 1171 to 3425 emu Oe/g at 300 K. The direct effects of graphene on morphology, crystal structure and magnetic properties reveal that the studied samples are suitable for turbidity and color removal. The magnetic entropy change is estimated from magnetization data using the Maxwell relation. The Curie temperature calculated from the Curie-Weiss law and the maximum entropy change are in good agreement with each other. Based on UV diffuse reflectance spectroscopy studies, the optical band gaps are in the range of 1.4-2.15 eV. In addition, the combination of small particle size and good magnetic properties makes the investigated samples potential candidates for superior catalysts, adsorbents, and electromagnetic wave absorbers.
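
    The entropy-change estimate mentioned above follows the Maxwell relation ΔS(T) = ∫ (∂M/∂T)_H dH. A hedged numerical sketch, using a synthetic M(T, H) surface because the measured magnetisation data are not reproduced in the abstract; the functional form, temperature, and field values are assumptions for illustration.

      import numpy as np

      T = np.linspace(250, 450, 101)     # K
      H = np.linspace(0, 20000, 201)     # Oe
      Tc = 380.0                         # assumed transition temperature, K

      # Synthetic magnetisation surface: decays through Tc, saturates in field.
      M = (np.tanh((Tc - T[:, None]) / 40.0) + 1) * 20 * np.tanh(H[None, :] / 5000.0)

      dM_dT = np.gradient(M, T, axis=0)       # ∂M/∂T at constant H
      delta_S = np.trapz(dM_dT, H, axis=1)    # ΔS(T) over the full field span
      peak = np.argmax(np.abs(delta_S))
      print(f"peak |ΔS| ≈ {abs(delta_S[peak]):.0f} erg/(g K) near T = {T[peak]:.0f} K")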

  1. Does body satisfaction influence self-esteem in adolescents' daily lives? An experience sampling study.

    PubMed

    Fuller-Tyszkiewicz, Matthew; McCabe, Marita; Skouteris, Helen; Richardson, Ben; Nihill, Kristy; Watson, Brittany; Solomon, Daniel

    2015-12-01

    This study examined, within the context of the Contingencies of Self-Worth model, state-based associations between self-esteem and body satisfaction using the experience sampling method. One hundred and forty-four adolescent girls (mean age = 14.28 years) completed up to 6 assessments per day for one week using Palm personal digital assistants, in addition to baseline measures of trait body satisfaction and self-esteem. Results showed considerable variation in both state-based constructs within days, and evidence of effects of body satisfaction on self-esteem, but not vice versa. Although these state-based associations were small in size and weakened as the time lag between assessments increased for the sample as a whole, individual differences in the magnitude of these effects were observed and predicted by trait self-esteem and body satisfaction. Collectively, these findings offer support for key tenets of the Contingencies of Self-Worth model. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  2. Geochemistry of CI chondrites: Major and trace elements, and Cu and Zn Isotopes

    NASA Astrophysics Data System (ADS)

    Barrat, J. A.; Zanda, B.; Moynier, F.; Bollinger, C.; Liorzou, C.; Bayon, G.

    2012-04-01

    In order to check the heterogeneity of the CI chondrites and determine the average composition of this group of meteorites, we analyzed a series of six large chips (weighing between 0.6 and 1.2 g) of Orgueil prepared from five different stones. In addition, one sample from each of Ivuna and Alais was analyzed. Although the sizes of the chips used in this study were “large”, our results show evidence for minor chemical heterogeneity in Orgueil, particularly for alkali elements and U. After removal of one outlier sample, the spread of the results is considerably reduced. For most of the 46 elements analyzed in this study, the average composition calculated for Orgueil is in very good agreement with previous CI estimates. This average, obtained with a “large” mass of samples, is analytically homogeneous and is suitable for normalization purposes. Finally, the Cu and Zn isotopic ratios are homogeneously distributed within the CI parent body with a spread of less than 100 ppm per atomic mass unit (amu).

  3. Phase behaviour of oat β-glucan/sodium caseinate mixtures varying in molecular weight.

    PubMed

    Agbenorhevi, Jacob K; Kontogiorgos, Vassilis; Kasapis, Stefan

    2013-05-01

    The isothermal phase behaviour at 5 °C of mixtures of sodium caseinate and oat β-glucan isolates varying in molecular weight (MW) was investigated by means of phase diagram construction, rheometry, fluorescence microscopy and electrophoresis. Phase diagrams indicated that the compatibility of the β-glucan/sodium caseinate system increases as β-glucan MW decreases. Images of mixtures taken at various biopolymer concentrations revealed phase separated domains. Results also revealed that at the state of thermodynamic equilibrium, lower MW samples yielded considerable viscosity in the mixture. At equivalent hydrodynamic volume of β-glucan in the mixtures, samples varying in molecular weight exhibited similar flow behaviour. A deviation dependent on the protein concentration was observed for the high MW sample in the concentrated regime due to the size of β-glucan aggregates formed. Results demonstrate that by controlling the structural features of β-glucan in mixtures with sodium caseinate, informed manipulation of rheological properties in these systems can be achieved. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Studies of cosmogenic in-situ 14CO and 14CO2 produced in terrestrial and extraterrestrial samples: experimental procedures and applications

    NASA Astrophysics Data System (ADS)

    Lal, D.; Jull, A. J. T.

    1994-06-01

    We have developed an experimental procedure for quantitative extraction of cosmogenic in-situ 14C produced in terrestrial and extraterrestrial samples, in the two chemical forms 14CO and 14CO2 in which it is found to be present in these samples. The technique is based on wet digestion of the sample in vacuo with hydrofluoric acid at 60-80°C in a Kel-F® vessel. Kel-F is a homopolymer of chlorotrifluoroethylene. The procedures and the digestion vessel sizes used allow convenient extraction of 14C activity from samples weighing 50 mg to 50 g. Procedure blanks were reduced considerably by the experience gained with the system, and can be reduced further. We determined that most of the in-situ 14C activity was present in the CO phase (> 60%) in the case of both terrestrial quartz and bulk samples of meteorites, analogous to the case of in-situ production of 14C in ice. Some results of measurements of 14C activities in meteorites and in terrestrial samples are presented. The latter include several samples which have been studied earlier for in-situ 10Be (and 26Al) concentrations, and allow us to determine relative 14C and 10Be production rates in quartz.

  5. Development of a fibre size-specific job-exposure matrix for airborne asbestos fibres.

    PubMed

    Dement, J M; Kuempel, E D; Zumwalde, R D; Smith, R J; Stayner, L T; Loomis, D

    2008-09-01

    To develop a method for estimating fibre size-specific exposures to airborne asbestos dust for use in epidemiological investigations of exposure-response relations. Archived membrane filter samples collected at a Charleston, South Carolina asbestos textile plant during 1964-8 were analysed by transmission electron microscopy (TEM) to determine the bivariate diameter/length distribution of airborne fibres by plant operation. The protocol used for these analyses was based on the direct transfer method published by the International Standards Organization (ISO), modified to enhance fibre size determinations, especially for long fibres. Procedures to adjust standard phase contrast microscopy (PCM) fibre concentration measures using the TEM data in a job-exposure matrix (JEM) were developed in order to estimate fibre size-specific exposures. A total of 84 airborne dust samples were used to measure diameter and length for over 18,000 fibres or fibre bundles. Consistent with previous studies, a small proportion of airborne fibres were longer than 5 µm, but the proportion varied considerably by plant operation (range 6.9% to 20.8%). The bivariate diameter/length distribution of airborne fibres was expressed as the proportion of fibres in 20 size-specific cells and this distribution demonstrated a relatively high degree of variability by plant operation. PCM adjustment factors also varied substantially across plant operations. These data provide new information concerning the airborne fibre characteristics for a previously studied textile facility. The TEM data demonstrate that the vast majority of airborne fibres inhaled by the workers were shorter than 5 µm in length, and thus not included in the PCM-based fibre counts. The TEM data were used to develop a new fibre size-specific JEM for use in an updated cohort mortality study to investigate the role of fibre dimension in the development of asbestos-related lung diseases.
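
    A hedged sketch of the adjustment logic such a JEM implies: PCM counts only fibres roughly longer than 5 µm and thicker than about 0.25 µm, so a size-cell-specific concentration can be estimated by scaling the PCM measurement by the TEM proportion in that cell relative to the TEM proportion of PCM-countable fibres. The cell proportions below are invented, not taken from the study.

      pcm_conc = 2.0   # fibres/mL, a hypothetical historical PCM measurement
      tem_props = {    # (diameter, length) cell -> TEM proportion of all fibres
          ("d<0.25um", "L<5um"): 0.55,
          ("d<0.25um", "L>5um"): 0.10,
          ("d>0.25um", "L<5um"): 0.25,
          ("d>0.25um", "L>5um"): 0.10,
      }
      pcm_countable = tem_props[("d>0.25um", "L>5um")]   # visible by PCM

      for cell, p in tem_props.items():
          est = pcm_conc * p / pcm_countable
          print(f"{cell}: estimated {est:.1f} fibres/mL")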

  6. Trophic interactions of common elasmobranchs in deep-sea communities of the Gulf of Mexico revealed through stable isotope and stomach content analysis

    NASA Astrophysics Data System (ADS)

    Churchill, Diana A.; Heithaus, Michael R.; Vaudo, Jeremy J.; Grubbs, R. Dean; Gastrich, Kirk; Castro, José I.

    2015-05-01

    Deep-water sharks are abundant and widely distributed in the northern and eastern Gulf of Mexico. As mid- and upper-level consumers that can range widely, sharks likely are important components of deep-sea communities and their trophic interactions may serve as system-wide baselines that could be used to monitor the overall health of these communities. We investigated the trophic interactions of deep-sea sharks using a combination of stable isotope (δ13C and δ15N) and stomach content analyses. Two hundred thirty-two muscle samples were collected from elasmobranchs captured off the bottom at depths between 200 and 1100 m along the northern slope (NGS) and the west Florida slope (WFS) of the Gulf of Mexico during 2011 and 2012. Although we detected some spatial, temporal, and interspecific variation in apparent trophic positions based on stable isotopes, there was considerable isotopic overlap among species, between locations, and through time. Overall δ15N values in the NGS region were higher than in the WFS. The δ15N values also increased between April 2011 and 2012 in the NGS, but not the WFS, within Squalus cf. mitsukurii. We found that stable isotope values of S. cf. mitsukurii, the most commonly captured elasmobranch, varied between sample regions, through time, and also with sex and size. Stomach content analysis (n=105) suggested relatively similar diets at the level of broad taxonomic categories of prey among the taxa with sufficient sample sizes. We did not detect a relationship between body size and relative trophic levels inferred from δ15N, but patterns within several species suggest increasing trophic levels with increasing size. Both δ13C and δ15N values suggest a substantial degree of overlap among most deep-water shark species. This study provides the first characterization of the trophic interactions of deep-sea sharks in the Gulf of Mexico and establishes system baselines for future investigations.

  7. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of a type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
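
    For context, the standard calculation the review checks for: the per-arm sample size needed to detect a standardized mean difference (SMD) with a two-sided two-sample test is n = 2(z_{1-α/2} + z_{1-β})²/SMD². A short sketch:

      import math
      from scipy.stats import norm

      def n_per_arm(smd, power=0.80, alpha=0.05):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil(2 * z**2 / smd**2)

      for smd in (0.3, 0.5, 0.8):
          n = n_per_arm(smd)
          print(f"SMD {smd}: {n} per arm, {2 * n} total")
      # At the review's average total sample of 153 (≈76 per arm), the
      # detectable SMD at 80% power works out to roughly 0.45.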

  8. Effects of sources of variability on sample sizes required for RCTs, applied to trials of lipid-altering therapies on carotid artery intima-media thickness.

    PubMed

    Gould, A Lawrence; Koglin, Joerg; Bain, Raymond P; Pinto, Cathy-Anne; Mitchel, Yale B; Pasternak, Richard C; Sapre, Aditi

    2009-08-01

    Studies measuring progression of carotid artery intima-media thickness (cIMT) have been used to estimate the effect of lipid-modifying therapies on cardiovascular event risk. The likelihood that future cIMT clinical trials will detect a true treatment effect is estimated by leveraging results from prior studies. The present analyses assess the impact of between- and within-study variability, based on currently published data from prior clinical studies, on the likelihood that ongoing or future cIMT trials will detect the true treatment effect of lipid-modifying therapies. Published data from six contemporary cIMT studies (ASAP, ARBITER 2, RADIANCE 1, RADIANCE 2, ENHANCE, and METEOR) including data from a total of 3563 patients were examined. Bayesian and frequentist methods were used to assess the impact of between-study variability on the likelihood of detecting true treatment effects on 1-year cIMT progression/regression and to provide a sample size estimate that would specifically compensate for the effect of between-study variability. In addition to the well-described within-study variability, there is considerable between-study variability associated with the measurement of annualized change in cIMT. Accounting for the additional between-study variability decreases the power for existing study designs. In order to account for the added between-study variability, it is likely that future cIMT studies would require a large increase in sample size in order to provide a substantial probability (≥90%) of having 90% power to detect a true treatment effect. A limitation is that analyses are based on study-level data; future meta-analyses incorporating patient-level data would be useful for confirmation. Due to substantial within- and between-study variability in the measure of 1-year change of cIMT, as well as uncertainty about progression rates in contemporary populations, future study designs evaluating the effect of new lipid-modifying therapies on atherosclerotic disease progression are likely to require large sample sizes in order to demonstrate a true treatment effect.
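
    A rough simulation of the paper's central concern, with invented numbers rather than values from the cited trials: when the true effect applicable to a given trial itself varies between studies (between-study SD tau), the chance that a single new trial detects a significant effect in the right direction falls below the nominal power, and larger samples only partially recover it.

      import numpy as np

      rng = np.random.default_rng(7)

      def detection_prob(n_per_arm, mu=0.02, sigma=0.10, tau=0.015, n_sim=20000):
          # mu: mean true effect on annualized cIMT change (mm/yr); sigma:
          # within-trial patient SD; tau: between-study SD of the true effect.
          true_eff = rng.normal(mu, tau, n_sim)   # this particular trial's truth
          se = sigma * np.sqrt(2 / n_per_arm)     # SE of the group difference
          est = rng.normal(true_eff, se)          # observed difference
          return np.mean(est / se > 1.96)         # significant, correct sign

      for n in (200, 500, 1000):
          print(f"n per arm = {n}: detection probability ≈ {detection_prob(n):.2f}")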

  9. Intraspecific variability in the life histories of endemic coral-reef fishes between photic and mesophotic depths across the Central Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Winston, M. S.; Taylor, B. M.; Franklin, E. C.

    2017-06-01

    Mesophotic coral ecosystems (MCEs) represent the lowest depth distribution inhabited by many coral reef-associated organisms. Research on fishes associated with MCEs is sparse, leading to a critical lack of knowledge of how reef fish found at mesophotic depths may vary from their shallow reef conspecifics. We investigated intraspecific variability in body condition and growth of three Hawaiian endemics collected from shallow, photic reefs (5-33 m deep) and MCEs (40-75 m) throughout the Hawaiian Archipelago and Johnston Atoll: the detritivorous goldring surgeonfish, Ctenochaetus strigosus, and the planktivorous threespot chromis, Chromis verater, and Hawaiian dascyllus, Dascyllus albisella. Estimates of body condition and size-at-age varied between shallow and mesophotic depths; however, these demographic differences were outweighed by the magnitude of variability found across the latitudinal gradient of locations sampled within the Central Pacific. Body condition and maximum body size were lowest in samples collected from shallow and mesophotic Johnston Atoll sites, with no difference occurring between depths. Samples from the Northwestern Hawaiian Islands tended to have the highest body condition and reached the largest body sizes, with differences between shallow and mesophotic sites highly variable among species. The findings of this study support newly emerging research demonstrating intraspecific variability in the life history of coral-reef fish species whose distributions span shallow and mesophotic reefs. This suggests not only that conservation and fisheries management should take into consideration differences in the life histories of reef-fish populations across spatial scales, but also that information derived from studies of shallow fishes should be applied with caution to conspecific populations in mesophotic coral environments.

  10. Will Big Data Close the Missing Heritability Gap?

    PubMed

    Kim, Hwasoon; Grueneberg, Alexander; Vazquez, Ana I; Hsu, Stephen; de Los Campos, Gustavo

    2017-11-01

    Despite the important discoveries reported by genome-wide association (GWA) studies, for most traits and diseases the prediction R-squared (R-sq.) achieved with genetic scores remains considerably lower than the trait heritability. Modern biobanks will soon deliver unprecedentedly large biomedical data sets: Will the advent of big data close the gap between the trait heritability and the proportion of variance that can be explained by a genomic predictor? We addressed this question using Bayesian methods and a data analysis approach that produces a surface response relating prediction R-sq. with sample size and model complexity (e.g., number of SNPs). We applied the methodology to data from the interim release of the UK Biobank. Focusing on human height as a model trait and using 80,000 records for model training, we achieved a prediction R-sq. in testing (n = 22,221) of 0.24 (95% C.I.: 0.23-0.25). Our estimates show that prediction R-sq. increases with sample size, reaching an estimated plateau at values that ranged from 0.1 to 0.37 for models using 500 and 50,000 (GWA-selected) SNPs, respectively. Soon much larger data sets will become available. Using the estimated surface response, we forecast that larger sample sizes will lead to further improvements in prediction R-sq. We conclude that big data will lead to a substantial reduction of the gap between trait heritability and the proportion of interindividual differences that can be explained with a genomic predictor. However, even with the power of big data, for complex traits we anticipate that the gap between prediction R-sq. and trait heritability will not be fully closed. Copyright © 2017 by the Genetics Society of America.
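
    A sketch of the forecasting step described above, with a fabricated set of (sample size, R-sq.) points and an assumed saturating functional form; the study's actual response surface also varies with the number of SNPs.

      import numpy as np
      from scipy.optimize import curve_fit

      n = np.array([10_000, 20_000, 40_000, 80_000])   # training-set sizes
      r2 = np.array([0.12, 0.17, 0.21, 0.24])          # hypothetical testing R-sq.

      def saturating(n, r2_max, n_half):
          return r2_max * n / (n + n_half)

      (r2_max, n_half), _ = curve_fit(saturating, n, r2, p0=(0.4, 50_000))
      print(f"estimated plateau R-sq. ≈ {r2_max:.2f}")
      print(f"forecast at n = 500,000: {saturating(500_000, r2_max, n_half):.2f}")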

  11. Will Big Data Close the Missing Heritability Gap?

    PubMed Central

    Kim, Hwasoon; Grueneberg, Alexander; Vazquez, Ana I.; Hsu, Stephen; de los Campos, Gustavo

    2017-01-01

    Despite the important discoveries reported by genome-wide association (GWA) studies, for most traits and diseases the prediction R-squared (R-sq.) achieved with genetic scores remains considerably lower than the trait heritability. Modern biobanks will soon deliver unprecedentedly large biomedical data sets: Will the advent of big data close the gap between the trait heritability and the proportion of variance that can be explained by a genomic predictor? We addressed this question using Bayesian methods and a data analysis approach that produces a surface response relating prediction R-sq. with sample size and model complexity (e.g., number of SNPs). We applied the methodology to data from the interim release of the UK Biobank. Focusing on human height as a model trait and using 80,000 records for model training, we achieved a prediction R-sq. in testing (n = 22,221) of 0.24 (95% C.I.: 0.23–0.25). Our estimates show that prediction R-sq. increases with sample size, reaching an estimated plateau at values that ranged from 0.1 to 0.37 for models using 500 and 50,000 (GWA-selected) SNPs, respectively. Soon much larger data sets will become available. Using the estimated surface response, we forecast that larger sample sizes will lead to further improvements in prediction R-sq. We conclude that big data will lead to a substantial reduction of the gap between trait heritability and the proportion of interindividual differences that can be explained with a genomic predictor. However, even with the power of big data, for complex traits we anticipate that the gap between prediction R-sq. and trait heritability will not be fully closed. PMID:28893854

  12. 46 CFR 160.054-2 - Type and size.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Inflatable Liferafts § 160.054-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight type... special consideration. (b) Size. First-aid kits shall be of a size adequate for packing 12 standard single...

  13. 46 CFR 160.054-2 - Type and size.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Inflatable Liferafts § 160.054-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight type... special consideration. (b) Size. First-aid kits shall be of a size adequate for packing 12 standard single...

  14. 46 CFR 160.054-2 - Type and size.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Inflatable Liferafts § 160.054-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight type... special consideration. (b) Size. First-aid kits shall be of a size adequate for packing 12 standard single...

  15. 46 CFR 160.054-2 - Type and size.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Inflatable Liferafts § 160.054-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight type... special consideration. (b) Size. First-aid kits shall be of a size adequate for packing 12 standard single...

  16. 46 CFR 160.054-2 - Type and size.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Inflatable Liferafts § 160.054-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight type... special consideration. (b) Size. First-aid kits shall be of a size adequate for packing 12 standard single...

  17. Femoral stem size mismatch in women undergoing total hip arthroplasty.

    PubMed

    Dundon, John M; Felberbaum, Dvorah Leah; Long, William J

    2018-06-01

    Total hip arthroplasty (THA) is a highly successful surgery with a high prevalence in women. Women have been noted to have smaller proximal femoral anatomy and decreased bone strength compared to men. The goal of our study was to define the size discrepancy in femoral stem implants between men and women using a metaphyseal fitting single-taper stem. We retrospectively reviewed the THAs performed by a single surgeon over the previous two years. Data were extracted from operative reports regarding stem size, neck length and offset, and conversion to a different type of stem. These data were reviewed with confidence intervals, and a t-test for independent samples was performed, with p < 0.05 considered significant. We analyzed the data from 276 THAs performed (129 in men, and 147 in women). Women were noted to be associated with smaller stem sizes compared to men (37.67% in women, 6.11% in men), with 7.48% of women requiring conversion to a different type of implant. There was a significant difference in mean stem size (9.21 in men, 6.70 in women, p < 0.0001). Women also required reduced neck options significantly more often than men (38.97% in women, 9.29% in men, p < 0.0001). Review of femoral stem sizes reveals that current femoral stem sizing may not appropriately account for women, and alternative stem options should be available when using a metaphyseal fitting single-taper stem. Future consideration should be given to more anatomic female-sized femoral stems or alternative options.

  18. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  19. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology.
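
    A minimal simulation of the diagnostic used in this pair of papers, with invented data-generating values: if only results significant at p < .05 are "published", small studies contribute only large effects, inducing a negative correlation between effect size and sample size.

      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(3)
      published_r, published_n = [], []
      while len(published_n) < 1000:
          n = int(rng.integers(10, 300))        # study sample size
          true_z = rng.normal(0.1, 0.2)         # heterogeneous true effects
          obs_z = true_z + rng.normal(0, 1 / np.sqrt(n - 3))   # Fisher-z scale
          if abs(obs_z) * np.sqrt(n - 3) > 1.96:   # "published": p < .05 only
              published_r.append(abs(np.tanh(obs_z)))   # observed effect size
              published_n.append(n)

      corr, _ = pearsonr(published_r, published_n)
      print(f"correlation between |effect size| and sample size: {corr:.2f}")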

  20. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
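
    For the simpler case of ordinary (untrimmed) means, the classic cost-constrained allocation result that this line of work builds on has a closed form: the variance of the mean difference is minimized under a total-cost constraint when n1/n2 = (σ1/σ2)√(c2/c1). A small sketch with invented costs and standard deviations:

      import math

      def allocate(total_cost, sigma1, sigma2, c1, c2):
          ratio = (sigma1 / sigma2) * math.sqrt(c2 / c1)   # optimal n1 / n2
          n2 = total_cost / (c1 * ratio + c2)              # spend the budget
          return math.floor(ratio * n2), math.floor(n2)

      n1, n2 = allocate(total_cost=10_000, sigma1=20, sigma2=10, c1=5, c2=8)
      print(f"n1 = {n1}, n2 = {n2}")   # an unequal split beats 1:1 here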

  1. Mixed nano/micro-sized calcium phosphate composite and EDTA root surface etching improve availability of graft material in intrabony defects: an in vivo scanning electron microscopy evaluation.

    PubMed

    Gamal, Ahmed Y; Iacono, Vincent J

    2013-12-01

    The use of nanoparticles of graft materials may lead to breakthrough applications for periodontal regeneration. However, due to their small particle size, nanoparticles may be eliminated from periodontal defects by phagocytosis. In an attempt to improve nanoparticle retention in periodontal defects, the present in vivo study uses scanning electron microscopy (SEM) to evaluate the potential of micrograft particles of β-tricalcium phosphate (β-TCP) to enhance the binding and retention of nanoparticles of hydroxyapatite (nHA) on EDTA-treated and non-treated root surfaces in periodontal defects after 14 days of healing. Sixty patients having at least two hopeless periodontally affected teeth designated for extraction were randomly divided into four treatment groups (15 patients per group). Patients in group 1 had selected periodontal intrabony defects grafted with nHA of particle size 10 to 100 nm. Patients in group 2 were treated in a similar manner but had the affected roots etched for 2 minutes with a neutral 24% EDTA gel before grafting of the associated vertical defects with nHA. Patients in group 3 had the selected intrabony defects grafted with a composite graft consisting of equal volumes of nHA and β-TCP (particle size 63 to 150 nm). Patients in group 4 were treated as in group 3 but the affected roots were etched with neutral 24% EDTA as in group 2. For each of the four groups, one tooth was extracted immediately, and the second tooth was extracted after 14 days of healing for SEM evaluation. Fourteen days after surgery, all group 1 samples were devoid of any nanoparticles adherent to the root surfaces. In group 2, 44.7% of the root surface area was covered by a single layer of clot-blended graft particles 14 days after graft application. After 14 days, group 3 samples appeared to retain fibrin strands devoid of grafted particles. Immediately extracted root samples of group 4 had adherent graft particles that covered a considerable area of the root surfaces (88.6%); grafted particles appeared to cover all samples in a multilayered pattern. After 14 days, the group 4 extracted samples showed multilayered fibrin-covered nano/micro-sized graft particles adherent to the root surfaces (78.5%). The use of a composite graft consisting of nHA and micro-sized β-TCP after root surface treatment with 24% EDTA may be a suitable method to improve nHA retention in periodontal defects with subsequent graft bioreactivity.

  2. Surface degassing and modifications to vesicle size distributions in active basalt flows

    USGS Publications Warehouse

    Cashman, K.V.; Mangan, M.T.; Newman, S.

    1994-01-01

    The character of the vesicle population in lava flows includes several measurable parameters that may provide important constraints on lava flow dynamics and rheology. Interpretation of vesicle size distributions (VSDs), however, requires an understanding of vesiculation processes in feeder conduits, and of post-eruption modifications to VSDs during transport and emplacement. To this end we collected samples from active basalt flows at Kilauea Volcano: (1) near the effusive Kupaianaha vent; (2) through skylights in the approximately isothermal Wahaula and Kamoamoa tube systems transporting lava to the coast; (3) from surface breakouts at different locations along the lava tubes; and (4) from different locations in a single breakout from a lava tube 1 km from the episode 51 vent at Pu'u 'O'o. Near-vent samples are characterized by VSDs that show exponentially decreasing numbers of vesicles with increasing vesicle size. These size distributions suggest that nucleation and growth of bubbles were continuous during ascent in the conduit, with minor associated bubble coalescence resulting from differential bubble rise. The entire vesicle population can be attributed to shallow exsolution of H2O-dominated gases at rates consistent with those predicted by simple diffusion models. Measurements of H2O, CO2 and S in the matrix glass show that the melt equilibrated rapidly at atmospheric pressure. Down-tube samples maintain similar VSD forms but show a progressive decrease in both overall vesicularity and mean vesicle size. We attribute this change to open system, "passive" rise and escape of larger bubbles to the surface. Such gas loss from the tube system results in the output of 1.2 × 10^6 g/day SO2, an output representing an addition of approximately 1% to overall volatile budget calculations. A steady increase in bubble number density with downstream distance is best explained by continued bubble nucleation at rates of 7-8 bubbles/cm3 per second. Rates are ~25% of those estimated from the vent samples, and thus represent volatile supersaturations considerably less than those of the conduit. We note also that the small total volume represented by this new bubble population does not: (1) measurably deplete the melt in volatiles; or (2) make up for the overall vesicularity decrease resulting from the loss of larger bubbles. Surface breakout samples have distinctive VSDs characterized by an extreme depletion in the small vesicle population. This results in samples with much lower number densities and larger mean vesicle sizes than corresponding tube samples. Similar VSD patterns have been observed in solidified lava flows and are interpreted to result from either static (wall rupture) or dynamic (bubble rise and capture) coalescence. Through comparison with vent and tube vesicle populations, we suggest that, in addition to coalescence, the observed vesicle populations in the breakout samples have experienced a rapid loss of small vesicles consistent with 'ripening' of the VSD resulting from interbubble diffusion of volatiles. Confinement of ripening features to surface flows suggests that the thin skin that forms on surface breakouts may play a role in the observed VSD modification. © 1994.

  3. Size matters: relationships between body size and body mass of common coastal, aquatic invertebrates in the Baltic Sea

    PubMed Central

    Austin, Åsa; Bergström, Ulf; Donadi, Serena; Eriksson, Britas D.H.K.; Hansen, Joakim; Sundblad, Göran

    2017-01-01

    Background Organism biomass is one of the most important variables in ecological studies, making biomass estimations one of the most common laboratory tasks. Biomass of small macroinvertebrates is usually estimated as dry mass or ash-free dry mass (hereafter ‘DM’ vs. ‘AFDM’) per sample; a laborious and time consuming process, that often can be speeded up using easily measured and reliable proxy variables like body size or wet (fresh) mass. Another common way of estimating AFDM (one of the most accurate but also time-consuming estimates of biologically active tissue mass) is the use of AFDM/DM ratios as conversion factors. So far, however, these ratios typically ignore the possibility that the relative mass of biologically active vs. non-active support tissue (e.g., protective exoskeleton or shell)—and therefore, also AFDM/DM ratios—may change with body size, as previously shown for taxa like spiders, vertebrates and trees. Methods We collected aquatic, epibenthic macroinvertebrates (>1 mm) in 32 shallow bays along a 360 km stretch of the Swedish coast along the Baltic Sea; one of the largest brackish water bodies on Earth. We then estimated statistical relationships between the body size (length or height in mm), body dry mass and ash-free dry mass for 14 of the most common taxa; five gastropods, three bivalves, three crustaceans and three insect larvae. Finally, we statistically estimated the potential influence of body size on the AFDM/DM ratio per taxon. Results For most taxa, non-linear regression models describing the power relationship between body size and (i) DM and (ii) AFDM fit the data well (as indicated by low SE and high R2). Moreover, for more than half of the taxa studied (including the vast majority of the shelled molluscs), body size had a negative influence on organism AFDM/DM ratios. Discussion The good fit of the modelled power relationships suggests that the constants reported here can be used to quickly estimate organism dry- and ash-free dry mass based on body size, thereby freeing up considerable work resources. However, the considerable differences in constants between taxa emphasize the need for taxon-specific relationships, and the potential dangers associated with ignoring body size. The negative influence of body size on the AFDM/DM ratio found in a majority of the molluscs could be caused by increasingly thicker shells with organism age, and/or spawning-induced loss of biologically active tissue in adults. Consequently, future studies utilizing AFDM/DM (and presumably also AFDM/wet mass) ratios should carefully assess the potential influence of body size to ensure more reliable estimates of organism body mass. PMID:28149685

  4. Size matters: relationships between body size and body mass of common coastal, aquatic invertebrates in the Baltic Sea.

    PubMed

    Eklöf, Johan; Austin, Åsa; Bergström, Ulf; Donadi, Serena; Eriksson, Britas D H K; Hansen, Joakim; Sundblad, Göran

    2017-01-01

    Organism biomass is one of the most important variables in ecological studies, making biomass estimations one of the most common laboratory tasks. Biomass of small macroinvertebrates is usually estimated as dry mass or ash-free dry mass (hereafter 'DM' vs. 'AFDM') per sample; a laborious and time consuming process, that often can be speeded up using easily measured and reliable proxy variables like body size or wet (fresh) mass. Another common way of estimating AFDM (one of the most accurate but also time-consuming estimates of biologically active tissue mass) is the use of AFDM/DM ratios as conversion factors. So far, however, these ratios typically ignore the possibility that the relative mass of biologically active vs. non-active support tissue (e.g., protective exoskeleton or shell)-and therefore, also AFDM/DM ratios-may change with body size, as previously shown for taxa like spiders, vertebrates and trees. We collected aquatic, epibenthic macroinvertebrates (>1 mm) in 32 shallow bays along a 360 km stretch of the Swedish coast along the Baltic Sea; one of the largest brackish water bodies on Earth. We then estimated statistical relationships between the body size (length or height in mm), body dry mass and ash-free dry mass for 14 of the most common taxa; five gastropods, three bivalves, three crustaceans and three insect larvae. Finally, we statistically estimated the potential influence of body size on the AFDM/DM ratio per taxon. For most taxa, non-linear regression models describing the power relationship between body size and (i) DM and (ii) AFDM fit the data well (as indicated by low SE and high R 2 ). Moreover, for more than half of the taxa studied (including the vast majority of the shelled molluscs), body size had a negative influence on organism AFDM/DM ratios. The good fit of the modelled power relationships suggests that the constants reported here can be used to quickly estimate organism dry- and ash-free dry mass based on body size, thereby freeing up considerable work resources. However, the considerable differences in constants between taxa emphasize the need for taxon-specific relationships, and the potential dangers associated with ignoring body size. The negative influence of body size on the AFDM/DM ratio found in a majority of the molluscs could be caused by increasingly thicker shells with organism age, and/or spawning-induced loss of biologically active tissue in adults. Consequently, future studies utilizing AFDM/DM (and presumably also AFDM/wet mass) ratios should carefully assess the potential influence of body size to ensure more reliable estimates of organism body mass.
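
    A sketch of the fitting task described in these two records: a per-taxon power relationship DM = a·size^b estimated by non-linear least squares. The data below are fabricated for illustration; real measurements would come from the collected invertebrates.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(11)
      size_mm = rng.uniform(2, 20, 60)                          # e.g. shell heights
      dm_mg = 0.05 * size_mm**2.8 * rng.lognormal(0, 0.15, 60)  # "observed" dry mass

      def power_law(x, a, b):
          return a * x**b

      (a, b), _ = curve_fit(power_law, size_mm, dm_mg, p0=(0.1, 2.5))
      print(f"DM ≈ {a:.3f} * size^{b:.2f} mg")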

  5. Preparation of highly multiplexed small RNA sequencing libraries.

    PubMed

    Persson, Helena; Søkilde, Rolf; Pirona, Anna Chiara; Rovira, Carlos

    2017-08-01

    MicroRNAs (miRNAs) are ~22-nucleotide-long small non-coding RNAs that regulate the expression of protein-coding genes by base pairing to partially complementary target sites, preferentially located in the 3′ untranslated region (UTR) of target mRNAs. The expression and function of miRNAs have been extensively studied in human disease, as well as the possibility of using these molecules as biomarkers for prognostication and treatment guidance. To identify and validate miRNAs as biomarkers, their expression must be screened in large collections of patient samples. Here, we develop a scalable protocol for the rapid and economical preparation of a large number of small RNA sequencing libraries using dual indexing for multiplexing. Combined with the use of off-the-shelf reagents, more samples can be sequenced simultaneously on large-scale sequencing platforms at a considerably lower cost per sample. Sample preparation is simplified by pooling libraries prior to gel purification, which allows for the selection of a narrow size range while minimizing sample variation. A comparison with publicly available data from benchmarking of miRNA analysis platforms showed that this method captures absolute and differential expression as effectively as commercially available alternatives.

  6. Toward Monitoring Parkinson's Through Analysis of Static Handwriting Samples: A Quantitative Analytical Framework.

    PubMed

    Zhi, Naiqian; Jaeger, Beverly Kris; Gouldstone, Andrew; Sipahi, Rifat; Frank, Samuel

    2017-03-01

    Detection of changes in micrographia as a manifestation of symptomatic progression or therapeutic response in Parkinson's disease (PD) is challenging as such changes can be subtle. A computerized toolkit based on quantitative analysis of handwriting samples would be valuable as it could supplement and support clinical assessments, help monitor micrographia, and link it to PD. Such a toolkit would be especially useful if it could detect subtle yet relevant changes in handwriting morphology, thus enhancing resolution of the detection procedure. This would be made possible by developing a set of metrics sensitive enough to detect and discern micrographia with specificity. Several metrics that are sensitive to the characteristics of micrographia were developed, with minimal sensitivity to confounding handwriting artifacts. These metrics capture character size reduction, ink utilization, and pixel density within a writing sample from left to right. They are used here to "score" handwritten signatures of 12 different individuals corresponding to healthy and symptomatic PD conditions, and sample control signatures that had been artificially reduced in size for comparison purposes. Moreover, metric analyses of samples from ten of the 12 individuals for whom clinical diagnosis time is known show considerable informative variations when applied to static signature samples obtained before and after diagnosis. In particular, a measure called pixel density variation showed statistically significant differences between two comparison groups of remote signature recordings, earlier versus recent, based on independent and paired t-test analyses on a total of 40 signature samples. The quantitative framework developed here has the potential to be used in future controlled experiments to study micrographia and links to PD from various aspects, including monitoring and assessment of applied interventions and treatments. The inherent value in this methodology is further enhanced by its reliance solely on static signatures, not requiring dynamic sampling with specialized equipment.
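
    One plausible reading of the pixel-density metrics described above, sketched in Python with invented thresholds and a synthetic "signature" whose strokes shrink from left to right; the real toolkit's definitions may differ.

      import numpy as np

      def pixel_density_profile(image, n_strips=10, ink_threshold=128):
          ink = image < ink_threshold   # dark pixels count as ink
          strips = np.array_split(ink, n_strips, axis=1)
          return np.array([s.mean() for s in strips])   # ink fraction, left to right

      def pixel_density_variation(image):
          profile = pixel_density_profile(image)
          return profile.std() / (profile.mean() + 1e-9)   # coefficient of variation

      # Synthetic signature: stroke height decays left to right, mimicking
      # micrographia-like size reduction within the sample.
      rng = np.random.default_rng(5)
      img = np.full((100, 400), 255, dtype=np.uint8)
      for x in range(400):
          h = int(40 * (1 - x / 500))
          ys = 50 + rng.integers(-h // 2 - 1, h // 2 + 1, size=3)
          img[np.clip(ys, 0, 99), x] = 0
      print(f"pixel density variation: {pixel_density_variation(img):.2f}")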

  7. Feasibility of Recruiting a Diverse Sample of Men Who Have Sex with Men: Observation from Nanjing, China

    PubMed Central

    Tang, Weiming; Yang, Haitao; Mahapatra, Tanmay; Huan, Xiping; Yan, Hongjing; Li, Jianjun; Fu, Gengfeng; Zhao, Jinkou; Detels, Roger

    2013-01-01

    Background Respondent-driven sampling (RDS) is well recognized as a method for sampling from hard-to-reach populations such as commercial sex workers, drug users and men who have sex with men. However, the feasibility of this sampling strategy for recruiting a diverse spectrum of these hidden populations is not yet well understood in developing countries. Methods In a cross-sectional study in Nanjing city of Jiangsu province of China, 430 MSM, including 9 seeds, were recruited using RDS over a 14-week study period. Information regarding socio-demographic characteristics and sexual risk behavior was collected and testing was done for HIV and syphilis. Duration, completion, participant characteristics and the equilibrium of key factors were used for assessing the feasibility of RDS. Homophily of key variables, socio-demographic distribution and social network size were used as the indicators of diversity. Results In the study sample, adjusted HIV and syphilis prevalence were 6.6% and 14.6%, respectively. The majority (96.3%) of the participants were recruited by members of their own social network. Although there was a tendency for recruitment within the same self-identified group (homosexuals recruited 60.0% homosexuals), considerable cross-group recruitment (bisexuals recruited 52.3% homosexuals) was also seen. Homophily of the self-identified sexual orientations was 0.111 for homosexuals. Upon completion of the recruitment process, participant characteristics and the equilibrium of key factors indicated that RDS was feasible for sampling MSM in Nanjing. Participants recruited by RDS were found to be diverse after assessing the homophily of key variables in successive waves of recruitment, the proportion of characteristics after reaching equilibrium and the social network size. The observed design effects were nearly the same as or even better than the theoretical design effect of 2. Conclusion RDS was found to be an efficient and feasible sampling method for recruiting a diverse sample of MSM in a reasonable time. PMID:24244280
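
    Studies of this kind typically adjust prevalence estimates for the fact that RDS oversamples people with larger personal networks; a minimal sketch of the RDS-II (Volz-Heckathorn) inverse-degree weighting, on fabricated data sized like this study's sample:

      import numpy as np

      rng = np.random.default_rng(9)
      degree = rng.integers(2, 50, size=430)     # reported personal network sizes
      positive = rng.uniform(size=430) < 0.066   # toy outcome indicator

      w = 1.0 / degree                           # inverse-degree weights
      naive = positive.mean()
      rds2 = np.sum(w * positive) / np.sum(w)
      print(f"naive prevalence {naive:.3f}, RDS-II adjusted {rds2:.3f}")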

  8. Mineral Element Contents in Commercially Valuable Fish Species in Spain

    PubMed Central

    Peña-Rivas, Luis; Ortega, Eduardo; López-Martínez, Concepción; Olea-Serrano, Fátima; Lorenzo, Maria Luisa

    2014-01-01

    The aim of this study was to measure selected metal concentrations in Trachurus trachurus, Trachurus picturatus, and Trachurus mediterraneus, which are widely consumed in Spain. Principal component analysis suggested that Cr was the variable chiefly responsible for identifying T. trachurus, As and Sn for T. mediterraneus, and the remaining variables for T. picturatus. This well-defined discrimination between fish species provided by mineral element content allows them to be distinguished on the basis of their metal content. Based on the samples collected, and recognizing the inferential limitation of this study's sample size, the metal concentrations found are below the proposed limit values for human consumption. However, it should be taken into consideration that there are other dietary sources of these metals. In conclusion, metal contents in the fish species analyzed are acceptable for human consumption from both nutritional and toxicity points of view. PMID:24895678
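
    As a rough illustration of the analysis, the following sketch runs a principal component analysis on standardized metal concentrations and inspects the loadings to see which elements separate which species. The element panel and concentrations are synthetic stand-ins, not the study's measurements.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    elements = ["Cr", "As", "Sn", "Zn", "Cu"]          # hypothetical panel
    # Synthetic concentrations for three species, 20 samples each
    X = np.vstack([rng.normal(loc, 0.3, size=(20, 5)) for loc in
                   ([2.0, 1.0, 1.0, 1.0, 1.0],         # species A: high Cr
                    [1.0, 2.0, 2.0, 1.0, 1.0],         # species B: high As, Sn
                    [1.0, 1.0, 1.0, 2.0, 2.0])])       # species C: the rest

    Z = StandardScaler().fit_transform(X)              # standardize each element
    pca = PCA(n_components=2).fit(Z)
    scores = pca.transform(Z)
    # Loadings show which elements drive each component, i.e. which
    # elements separate which species in the score plot.
    for name, load in zip(elements, pca.components_.T):
        print(f"{name}: PC1={load[0]:+.2f} PC2={load[1]:+.2f}")
    ```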

  9. Optical measurements for interfacial conduction and breakdown

    NASA Astrophysics Data System (ADS)

    Hebner, R. E., Jr.; Kelley, E. F.; Hagler, J. N.

    1983-01-01

    Measurements and calculations contributing to the understanding of space and surface charges in practical insulation systems are given. Calculations are presented which indicate the magnitude of charge density necessary to appreciably modify the electric field from what would be calculated from geometrical considerations alone. Experimental data are also presented that locate the breakdown in an electrode system with a paper sample bridging the gap between the electrodes. It is found that, with careful handling, the breakdown does not necessarily occur along the interface even if heavily contaminated oil is used. The effects of space charge in the bulk liquid are electro-optically examined in nitrobenzene and transformer oil. Several levels of contamination in transformer oil are investigated. Whereas much space charge can be observed in nitrobenzene, very little space charge, if any, can be observed in the transformer oil samples even at temperatures near 100 degrees C.

  10. Novel Application of Confocal Laser Scanning Microscopy and 3D Volume Rendering toward Improving the Resolution of the Fossil Record of Charcoal

    PubMed Central

    Belcher, Claire M.; Punyasena, Surangi W.; Sivaguru, Mayandi

    2013-01-01

    Variations in the abundance of fossil charcoals between rocks and sediments are assumed to reflect changes in fire activity in Earth’s past. These variations in fire activity are often considered to be in response to environmental, ecological or climatic changes. The role that fire plays in feedbacks to such changes is becoming increasingly important to understand and highlights the need to create robust estimates of variations in fossil charcoal abundance. The majority of charcoal-based fire reconstructions quantify the abundance of charcoal particles and do not consider the changes in the morphology of the individual particles that may have occurred due to fragmentation as part of their transport history. We have developed a novel application of confocal laser scanning microscopy coupled with image processing that enables the 3-dimensional reconstruction of individual charcoal particles. This method is able to measure the volume of both microfossil and mesofossil charcoal particles and allows the abundance of charcoal in a sample to be expressed as total volume of charcoal. The method further measures particle surface area and shape, allowing relationships between different size and shape metrics to be analysed and variations in particle size and size sorting between different samples to be fully considered. We believe application of this new imaging approach could allow significant improvement in our ability to estimate variations in past fire activity using fossil charcoals. PMID:23977267
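
    Once a particle has been segmented from the confocal stack, volume and a rough surface area follow from voxel counting. A minimal sketch under the assumption of an isotropic, binarized 3-D mask; the voxel size and face-counting surface estimate are illustrative choices, not the authors' exact pipeline.

    ```python
    import numpy as np

    def particle_metrics(mask, voxel_size_um=0.5):
        """Volume and rough surface area of one segmented charcoal particle.

        mask: 3-D boolean array from a segmented confocal stack (True = particle).
        voxel_size_um: isotropic voxel edge length (assumed 0.5 um here).
        """
        volume = mask.sum() * voxel_size_um ** 3
        # Surface area estimate: count exposed voxel faces along each axis
        faces = 0
        for axis in range(3):
            d = np.diff(mask.astype(np.int8), axis=axis)
            faces += np.abs(d).sum()
            # faces lying on the array boundary itself
            faces += mask.take(0, axis=axis).sum() + mask.take(-1, axis=axis).sum()
        area = faces * voxel_size_um ** 2
        return volume, area

    # Toy usage: a 10x10x10 um cube of "charcoal"
    mask = np.zeros((40, 40, 40), dtype=bool)
    mask[10:30, 10:30, 10:30] = True
    print(particle_metrics(mask))  # ~1000 um^3, ~600 um^2
    ```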

  11. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

    Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP) heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling and therefore avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude compared with previous approaches, as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, requiring, for example, only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
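
    The core idea of a parametric bootstrap CI can be shown in a few lines. This is not FIESTA itself (no LMM and no stochastic approximation), just the generic principle it accelerates, sketched for a simple normal-mean estimator: refit the estimator on data simulated from the fitted model instead of relying on asymptotic standard errors.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def estimate_mean(sample):
        return sample.mean()

    def parametric_bootstrap_ci(sample, n_boot=2000, alpha=0.05):
        """Percentile CI from a parametric bootstrap under a normal model."""
        mu, sigma = sample.mean(), sample.std(ddof=1)   # fit the model
        boot = np.array([
            estimate_mean(rng.normal(mu, sigma, size=sample.size))
            for _ in range(n_boot)                      # refit on simulated data
        ])
        return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))

    data = rng.normal(0.3, 1.0, size=200)   # synthetic data, true mean 0.3
    print(parametric_bootstrap_ci(data))
    ```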

  12. Why it is hard to find genes associated with social science traits: theoretical and empirical considerations.

    PubMed

    Chabris, Christopher F; Lee, James J; Benjamin, Daniel J; Beauchamp, Jonathan P; Glaeser, Edward L; Borst, Gregoire; Pinker, Steven; Laibson, David I

    2013-10-01

    We explain why traits of interest to behavioral scientists may have a genetic architecture featuring hundreds or thousands of loci with tiny individual effects rather than a few with large effects, and why such an architecture makes it difficult to find robust associations between traits and genes. We conducted a genome-wide association study at 2 sites, Harvard University and Union College, measuring more than 100 physical and behavioral traits with a sample size typical of candidate gene studies. We evaluated the predictions that alleles with large effect sizes would be rare and that most traits of interest to social science are characterized by a lack of strong directional selection. We also carried out a theoretical analysis of the genetic architecture of traits based on R.A. Fisher's geometric model of natural selection, and empirical analyses of the effects of selection bias and phenotype measurement stability on the results of genetic association studies. Although we replicated several known genetic associations with physical traits, we found only 2 associations with behavioral traits that met the nominal genome-wide significance threshold, indicating that physical and behavioral traits are mainly affected by numerous genes with small effects. The challenge for social science genomics is the likelihood that genes are connected to behavioral variation by lengthy, nonlinear, interactive causal chains; unraveling these chains requires allying with personal genomics to take advantage of the potential for large sample sizes, as well as continuing with traditional epidemiological studies.

  13. Prediction of accrual closure date in multi-center clinical trials with discrete-time Poisson process models.

    PubMed

    Tang, Gong; Kong, Yuan; Chang, Chung-Chou Ho; Kong, Lan; Costantino, Joseph P

    2012-01-01

    In a phase III multi-center cancer clinical trial or a large public health study, the sample size is predetermined to achieve desired power, and study participants are enrolled from tens or hundreds of participating institutions. As accrual approaches the target size, the coordinating data center needs to project the accrual closure date on the basis of the observed accrual pattern and notify the participating sites several weeks in advance. In the past, projections were simply based on crude assessments, and conservative measures were incorporated in order to achieve the target accrual size. This approach often resulted in an excessive accrual size and, subsequently, an unnecessary financial burden on the study sponsors. Here we proposed a discrete-time Poisson process-based method to estimate the accrual rate at the time of projection and, subsequently, the trial closure date. To ensure that the target size would be reached with high confidence, we also proposed a conservative method for the closure date projection. The proposed method was illustrated through the analysis of the accrual data of the National Surgical Adjuvant Breast and Bowel Project trial B-38. The results showed that application of the proposed method could help to save a considerable amount of expenditure in patient management without compromising the accrual goal in multi-center clinical trials. Copyright © 2012 John Wiley & Sons, Ltd.
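
    A stripped-down version of such a projection can be sketched as follows: estimate the weekly accrual rate, project the expected closure week, then push the closure date out until the Poisson probability of reaching the target meets a chosen confidence level. The rate estimator and confidence rule here are simplifying assumptions, not the paper's model.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def project_closure(weekly_counts, remaining, confidence=0.90):
        """Project weeks until `remaining` patients accrue.

        weekly_counts: recent weekly accruals used to estimate the rate.
        Returns (point estimate, conservative estimate) in weeks; the
        conservative value is the first horizon whose Poisson CDF puts
        >= `confidence` probability on reaching the target.
        """
        rate = np.mean(weekly_counts)            # accruals per week
        point = remaining / rate
        w = int(np.ceil(point))
        # P(total accrual over w weeks >= remaining) = sf(remaining - 1; rate * w)
        while poisson.sf(remaining - 1, rate * w) < confidence:
            w += 1
        return point, w

    # Toy usage: six recent weeks of accrual, 150 patients still needed
    print(project_closure([12, 9, 14, 11, 10, 13], remaining=150))
    ```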

  14. Is High Resolution Melting Analysis (HRMA) Accurate for Detection of Human Disease-Associated Mutations? A Meta Analysis

    PubMed Central

    Ma, Feng-Li; Jiang, Bo; Song, Xiao-Xiao; Xu, An-Gao

    2011-01-01

    Background High Resolution Melting Analysis (HRMA) is becoming the preferred method for mutation detection. However, its accuracy in the individual clinical diagnostic setting is variable. To assess the diagnostic accuracy of HRMA for human mutations in comparison to DNA sequencing in different routine clinical settings, we conducted a meta-analysis of published reports. Methodology/Principal Findings Out of 195 publications obtained from the initial search criteria, thirty-four studies assessing the accuracy of HRMA were included in the meta-analysis. We found that HRMA was a highly sensitive test for detecting disease-associated mutations in humans. Overall, the summary sensitivity was 97.5% (95% confidence interval (CI): 96.8–98.5; I2 = 27.0%). Subgroup analysis showed even higher sensitivity for non-HR-1 instruments (sensitivity 98.7% (95% CI: 97.7–99.3; I2 = 0.0%)) and for an eligible-sample-size subgroup (sensitivity 99.3% (95% CI: 98.1–99.8; I2 = 0.0%)). HRMA specificity showed considerable heterogeneity between studies. Sensitivity of the technique was influenced by sample size and instrument type, but not by sample source or dye type. Conclusions/Significance These findings show that HRMA is a highly sensitive, simple and low-cost test for detecting human disease-associated mutations, especially for samples with mutations of low incidence. The burden on DNA sequencing could be significantly reduced by the implementation of HRMA, but it should be recognized that its sensitivity varies according to the number of samples with/without mutations, and positive results require DNA sequencing for confirmation. PMID:22194806
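
    For readers unfamiliar with how summary sensitivities such as those above are pooled, the sketch below applies a simplified fixed-effect inverse-variance pooling on the logit scale with a continuity correction. It is a generic illustration with invented counts, not the meta-analytic model used in the paper (which also addresses heterogeneity).

    ```python
    import numpy as np

    def pooled_sensitivity(tp, fn):
        """Fixed-effect pooled sensitivity on the logit scale with a 95% CI.

        tp, fn: per-study true-positive and false-negative counts.
        Uses a 0.5 continuity correction throughout.
        """
        tp = np.asarray(tp, float) + 0.5
        fn = np.asarray(fn, float) + 0.5
        logit = np.log(tp / fn)                # per-study logit sensitivity
        var = 1 / tp + 1 / fn                  # variance of the per-study logit
        w = 1 / var
        pooled = (w * logit).sum() / w.sum()
        se = np.sqrt(1 / w.sum())
        expit = lambda x: 1 / (1 + np.exp(-x))
        return expit(pooled), (expit(pooled - 1.96 * se), expit(pooled + 1.96 * se))

    # Toy usage: five studies' mutation-detection counts (hypothetical)
    print(pooled_sensitivity(tp=[48, 95, 60, 200, 33], fn=[2, 3, 1, 4, 1]))
    ```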

  15. Morphometrics of aeolian blowouts from high-resolution digital elevation data: methodological considerations, shape metrics, and scaling

    NASA Astrophysics Data System (ADS)

    Hamilton, T. K.; Duke, G.; Brown, O.; Koenig, D.; Barchyn, T. E.; Hugenholtz, C.

    2011-12-01

    Aeolian blowouts are wind erosion hollows that form in vegetated aeolian landscapes. They are especially pervasive in dunefields of the northern Great Plains, yielding highly pitted or hummocky terrain and adding to the spatial variability of microenvironments. Their development is thought to be linked to feedbacks between morphology and airflow; however, few measurements are available to test this hypothesis, and the current dearth of morphology data is limiting modeling progress. From a systematic program of blowout mapping with high-resolution airborne LiDAR data, we used a GIS to calculate morphometrics for 1373 blowouts in the Great Sand Hills, Saskatchewan, Canada. All of the blowouts selected for this investigation were covered by grassland vegetation and inactive; their morphology represents the final stage of evolution. We first outline methodological considerations for delineating blowouts and measuring their volume. In particular, we present an objective method to enhance edge definition and reduce operator error and bias. We show that blowouts are slightly elongate and that 49% of the sampled blowouts are oriented parallel to the prevailing westerly winds. We also show that their size distribution is heavy-tailed, meaning that most blowouts are relatively small and rarely grow beyond 400 m³. Given that blowout growth is dominated by a positive feedback between sediment transport and vegetation erosion, these results suggest several possible mechanisms: i) blowouts simultaneously evolved and stabilized as a result of external climate forcing, ii) blowouts are slaved to exogenous biogenic disturbance patterns (e.g., bison wallows), or iii) a morphodynamic limiting mechanism restricts blowout size. Overall, these data will serve as a foundation for future study, providing insight into an understudied landform that is common in many dunefields.
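
    A heavy-tail claim of this kind can be checked visually with an empirical survival function on log-log axes, where a heavy tail appears as a slowly decaying, roughly linear upper tail. The sketch below uses synthetic lognormal volumes purely as a stand-in for the 1373 mapped blowouts.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(3)
    volumes = rng.lognormal(mean=3.0, sigma=1.2, size=1373)  # synthetic stand-in (m^3)

    v = np.sort(volumes)
    surv = 1.0 - np.arange(1, v.size + 1) / v.size           # empirical P(V > v)
    plt.loglog(v[:-1], surv[:-1], ".", ms=2)                 # drop the terminal zero
    plt.xlabel("blowout volume (m^3)")
    plt.ylabel("P(V > v)")
    plt.title("Slow, straight decay of the upper tail indicates heavy-tailed sizes")
    plt.show()
    ```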

  16. Comparative study on the role of gelatin, chitosan and their combination as tissue engineered scaffolds on healing and regeneration of critical sized bone defects: an in vivo study.

    PubMed

    Oryan, Ahmad; Alidadi, Soodeh; Bigham-Sadegh, Amin; Moshiri, Ali

    2016-10-01

    Gelatin and chitosan are natural polymers that have been used extensively in tissue engineering applications. The present study aimed to evaluate the effectiveness of chitosan, gelatin, and a combination of the two biopolymers (chitosan-gelatin) as bone scaffolds in the bone regeneration process in an experimentally induced critical-sized radial bone defect model in rats. Fifty radial bone defects were bilaterally created in 25 Wistar rats. The defects were randomly filled with chitosan, gelatin, chitosan-gelatin, or autograft, or left empty without any treatment (n = 10 in each group). The animals were examined by radiology and clinical evaluation before euthanasia. After 8 weeks, the rats were euthanized and their harvested healing bone samples were evaluated by radiology, CT-scan, biomechanical testing, gross pathology, histopathology, histomorphometry and scanning electron microscopy. Gelatin was biocompatible and biodegradable in vivo and showed superior biodegradation and biocompatibility compared with the chitosan and chitosan-gelatin scaffolds. Implantation of both the gelatin and chitosan-gelatin scaffolds in bone defects significantly increased new bone formation and mechanical properties compared with the untreated defects (P < 0.05). The combination of gelatin and chitosan considerably increased the structural and functional properties of the healing bones compared to the chitosan scaffold (P < 0.05). However, no significant differences were observed between the gelatin and gelatin-chitosan groups in these respects (P > 0.05). In conclusion, application of gelatin alone or in combination with chitosan had beneficial effects on bone regeneration, and both could be considered good options for bone tissue engineering strategies. However, chitosan alone was not able to promote considerable new bone formation in the experimentally induced critical-sized radial bone defects.

  17. Cognitive and Occupational Function in Survivors of Adolescent Cancer.

    PubMed

    Nugent, Bethany D; Bender, Catherine M; Sereika, Susan M; Tersak, Jean M; Rosenzweig, Margaret

    2018-02-01

    Adolescents with cancer have unique developmental considerations. These include ongoing brain development, particularly in the frontal lobe, and a focus on completing education and entering the workforce. Cancer and its treatment at this stage may therefore uniquely affect survivors' experience of cognitive and occupational function. An exploratory, cross-sectional, descriptive comparative study was employed to describe cognitive and occupational function in adult survivors of adolescent cancer (diagnosed between the ages of 15 and 21 years) and to explore differences from age- and gender-matched controls. In total, 23 survivors and 14 controls participated in the study. While significant differences were not found between the groups on measures of cognitive and occupational function, several small and medium effect sizes were found, suggesting that survivors may have greater difficulty than controls. Two small effect sizes were found in measures of neuropsychological performance (the Digit Vigilance test [d = 0.396] and Stroop test [d = 0.226]). Small and medium effect sizes ranging from 0.269 to 0.605 were found for aspects of perceived and total cognitive function. A small effect size was also found in work output (d = 0.367). While we did not find significant differences in cognitive or occupational function between survivors and controls, the effect sizes observed point to the need for future research. Future work using a larger sample and a longitudinal design is needed to further explore cognitive and occupational function in this vulnerable and understudied population and to assist in understanding patterns of change over time.
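
    For reference, the effect sizes quoted above are standardized mean differences. A minimal Cohen's d computation with a pooled standard deviation looks like this; the scores are synthetic, with group sizes matching the study's.

    ```python
    import numpy as np

    def cohens_d(x, y):
        """Cohen's d with a pooled standard deviation (conventionally,
        ~0.2 is small, ~0.5 medium, ~0.8 large)."""
        nx, ny = len(x), len(y)
        pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                      (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
        return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

    # Toy usage at the study's group sizes (23 survivors, 14 controls)
    rng = np.random.default_rng(4)
    survivors = rng.normal(49, 10, 23)   # synthetic test scores
    controls = rng.normal(53, 10, 14)
    print(round(cohens_d(controls, survivors), 2))
    ```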

  18. Are Small Schools Better? School Size Considerations for Safety & Learning. Policy Brief.

    ERIC Educational Resources Information Center

    McRobbie, Joan

    New studies from the 1990s have strengthened an already notable consensus on school size: smaller is better. This policy brief outlines research findings on why size makes a difference, how small is small enough, effective approaches to downsizing, and key barriers. No agreement exists at present on optimal school size, but research suggests a…

  19. 46 CFR 160.041-2 - Type and size.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight cabinet carrying... consideration. (b) Size. First-aid kits shall be of a size (approximately 9″×9″×2½″ inside) adequate for...

  20. 46 CFR 160.041-2 - Type and size.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight cabinet carrying... consideration. (b) Size. First-aid kits shall be of a size (approximately 9″×9″×2½″ inside) adequate for...

  1. 46 CFR 160.041-2 - Type and size.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight cabinet carrying... consideration. (b) Size. First-aid kits shall be of a size (approximately 9″ × 9″ × 2½″ inside) adequate for...

  2. 46 CFR 160.041-2 - Type and size.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight cabinet carrying... consideration. (b) Size. First-aid kits shall be of a size (approximately 9″ × 9″ × 2½″ inside) adequate for...

  3. 46 CFR 160.041-2 - Type and size.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight cabinet carrying... consideration. (b) Size. First-aid kits shall be of a size (approximately 9″×9″×2½″ inside) adequate for...

  4. Extraction and isotopic analysis of medium molecular weight hydrocarbons from Murchison using supercritical carbon dioxide

    NASA Technical Reports Server (NTRS)

    Gilmour, Iain; Pillinger, Colin

    1993-01-01

    The large variety of organic compounds present in carbonaceous chondrites poses particular problems in their analysis, not the least of which is terrestrial contamination. Conventional analytical approaches employ simple chromatographic techniques to fractionate the extractable compounds into broad classes of similar chemical structure. However, the use of organic solvents and their subsequent removal by evaporation results in the depletion or loss of semi-volatile compounds, as well as requiring considerable preparative work to assure solvent purity. Supercritical fluids have been shown to provide a powerful alternative to the conventional liquid organic solvents used for analytical extractions. A sample of Murchison from the Field Museum was analyzed. Two interior fragments were used; the first (2.85 g) was crushed in an agate pestle and mortar to a grain size of ca. 50-100 micron, the second (1.80 g) was broken into chips 3-8 mm in size. Each sample was loaded into a stainless steel bomb and placed in the extraction chamber of an Isco supercritical fluid extractor maintained at 35 C. High purity (99.9995 percent) carbon dioxide was used and was pressurized using an Isco syringe pump. The samples were extracted dynamically by flowing CO2 under pressure through the bomb and venting via a 50 micron fused silica capillary into 5 ml of hexane used as a collection solvent. The hexane was maintained at a temperature of 0.5 C. A series of extractions were done on each sample using CO2 of increasing density. The principal components extracted in each fraction are summarized.

  5. Extraction and isotopic analysis of medium molecular weight hydrocarbons from Murchison using supercritical carbon dioxide

    NASA Astrophysics Data System (ADS)

    Gilmour, Iain; Pillinger, Colin

    1993-03-01

    The large variety of organic compounds present in carbonaceous chondrites poses particular problems in their analysis, not the least of which is terrestrial contamination. Conventional analytical approaches employ simple chromatographic techniques to fractionate the extractable compounds into broad classes of similar chemical structure. However, the use of organic solvents and their subsequent removal by evaporation results in the depletion or loss of semi-volatile compounds, as well as requiring considerable preparative work to assure solvent purity. Supercritical fluids have been shown to provide a powerful alternative to the conventional liquid organic solvents used for analytical extractions. A sample of Murchison from the Field Museum was analyzed. Two interior fragments were used; the first (2.85 g) was crushed in an agate pestle and mortar to a grain size of ca. 50-100 micron, the second (1.80 g) was broken into chips 3-8 mm in size. Each sample was loaded into a stainless steel bomb and placed in the extraction chamber of an Isco supercritical fluid extractor maintained at 35 C. High purity (99.9995 percent) carbon dioxide was used and was pressurized using an Isco syringe pump. The samples were extracted dynamically by flowing CO2 under pressure through the bomb and venting via a 50 micron fused silica capillary into 5 ml of hexane used as a collection solvent. The hexane was maintained at a temperature of 0.5 C. A series of extractions were done on each sample using CO2 of increasing density. The principal components extracted in each fraction are summarized.

  6. Comparison of endotoxin and particle bounce in Marple cascade samplers with and without impaction grease.

    PubMed

    Kirychuk, Shelley P; Reynolds, Stephen J; Koehncke, Niels; Nakatsu, J; Mehaffy, John

    2009-01-01

    The health of persons engaged in agricultural activities is often related to or associated with environmental exposures in their workplace. Accurately measuring, analyzing, and reporting these exposures is paramount to the interpretation of outcomes. This paper describes issues related to sampling air in poultry barns with a cascade impactor. Specifically, the authors describe how particle bounce can affect measurement outcomes and how the use of impaction grease can affect particle bounce and laboratory analyses such as endotoxin measurements. This project was designed to (1) study the effect of particle bounce in Marple cascade impactors that use polyvinyl chloride (PVC) filters and (2) determine the effect of impaction grease on endotoxin assays when sampling poultry barn dust. A pilot study was undertaken utilizing six-stage Marple cascade impactors with PVC filters. Distortion of particulate size distributions and the effects of impaction grease on endotoxin analysis were studied in samples of poultry dust dispersed into a wind tunnel. Although there was no significant difference in overall dust concentration between samplers with and without impaction grease, there was a greater than 50% decrease in the mass median aerodynamic diameter (MMAD) values when impaction grease was not utilized. There was no difference in airborne endotoxin concentration or endotoxin MMAD between filters treated with impaction grease and those not treated. The results indicate that particle bounce should be a consideration when sampling poultry barn dust with Marple samplers containing PVC filters with no impaction grease. Careful consideration should be given to the use of impaction grease on PVC filters that will undergo endotoxin analysis, as there is potential for interference, particularly if high or low levels of endotoxin are anticipated.
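
    MMAD values such as those discussed above are conventionally obtained by log-probit regression of cumulative undersize mass against stage cut-point diameter. A minimal sketch of that calculation follows; the cut-points and stage masses are hypothetical, not this study's data.

    ```python
    import numpy as np
    from scipy.stats import norm, linregress

    def mmad_gsd(cut_diam_um, stage_mass):
        """MMAD and GSD from cascade impactor data by log-probit regression.

        cut_diam_um: stage cut-point diameters (descending, um).
        stage_mass: mass on each stage in the same order, with the
        backup-filter mass (no cut-point) appended as the last element.
        """
        mass = np.asarray(stage_mass, float)
        frac = mass / mass.sum()
        # Cumulative fraction of mass on particles SMALLER than each cut-point
        cum_under = 1 - np.cumsum(frac)[:-1]
        z = norm.ppf(np.clip(cum_under, 1e-6, 1 - 1e-6))   # probit transform
        fit = linregress(np.log(cut_diam_um), z)
        mmad = np.exp(-fit.intercept / fit.slope)          # diameter where z = 0
        gsd = np.exp(1 / fit.slope)                        # geometric standard deviation
        return mmad, gsd

    # Toy usage: six stages plus a backup filter (hypothetical numbers)
    print(mmad_gsd([21.3, 14.8, 9.8, 6.0, 3.5, 0.93],
                   [2, 5, 12, 20, 14, 6, 3]))
    ```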

  7. Chemical analyses of micrometre-sized solids by a miniature laser ablation/ionisation mass spectrometer (LMS)

    NASA Astrophysics Data System (ADS)

    Tulej, Marek; Wiesendanger, Reto; Neuland, Maike; Meyer, Stefan; Wurz, Peter; Neubeck, Anna; Ivarsson, Magnus; Riedo, Valentine; Moreno-Garcia, Pavel; Riedo, Andreas; Knopp, Gregor

    2017-04-01

    Investigation of the elemental and isotope composition of planetary solids with high spatial resolution is of considerable interest to current space research. Planetary materials are typically highly heterogeneous, and such studies can deliver detailed chemical information on individual sample components with sizes down to a few micrometres. The results of such investigations can yield mineralogical surface context, including the mineralogy of individual grains or the elemental composition of other objects embedded in the sample surface, such as micro-sized fossils. Bio-relevant material can then be identified by detecting bio-relevant elements and their isotope fractionation effects [1, 2]. For chemical analysis of heterogeneous solid surfaces we have combined a miniature laser ablation mass spectrometer (LMS) (mass resolution m/Δm 400-600; dynamic range 10^5-10^8) with an in situ microscope-camera system (spatial resolution ~2 µm, depth 10 µm). The microscope helps locate micrometre-sized solids across the sample surface for direct mass spectrometric analysis by the LMS instrument. The LMS instrument combines a fs-laser ion source with a miniature reflectron-type time-of-flight mass spectrometer. Mass spectrometric analysis of objects selected on the sample surface follows the ablation, atomisation, and ionisation of the sample by focused laser radiation (775 nm, 180 fs, 1 kHz; spot size ~20 µm) [4, 5, 6]. Mass spectra of almost all elements (isotopes) present in the investigated location are measured instantaneously. A number of heterogeneous rock samples containing micrometre-sized fossils and mineralogical grains were investigated with high selectivity and sensitivity. Filamentous structures observed in carbonate veins (in harzburgite) and amygdales in pillow basalt lava could be well characterised chemically, yielding the elemental and isotope composition of these objects [7, 8]. The investigation can be performed with high selectivity, since the host composition typically differs markedly from that of the analysed objects. In-depth chemical analysis (chemical profiling) is found to be particularly helpful, allowing relatively easy separation of the chemical composition of the host from that of the investigated objects [6]. Hence, chemical analyses of both the environment and the microstructures can be derived. Isotope compositions can be measured with a high level of confidence; nevertheless, the presence of clusters of similar masses can sometimes make this analysis difficult. Based on this work, we are confident that similar studies can be conducted in situ on planetary surfaces, delivering important chemical context and evidence of bio-relevant processes. [1] Summons et al., Astrobiology, 11, 157, 2011. [2] Wurz et al., Sol. Sys. Res. 46, 408, 2012. [3] Riedo et al., J. Anal. Atom. Spectrom. 28, 1256, 2013. [4] Riedo et al., J. Mass Spectrom. 48, 1, 2013. [5] Tulej et al., Geostand. Geoanal. Res., 38, 423, 2014. [6] Grimaudo et al., Anal. Chem. 87, 2041, 2015. [7] Tulej et al., Astrobiology, 15, 1, 2015. [8] Neubeck et al., Int. J. Astrobiology, 15, 133, 2016.

  8. A model based on Rock-Eval thermal analysis to quantify the size of the centennially persistent organic carbon pool in temperate soils

    NASA Astrophysics Data System (ADS)

    Cécillon, Lauric; Baudin, François; Chenu, Claire; Houot, Sabine; Jolivet, Romain; Kätterer, Thomas; Lutfalla, Suzanne; Macdonald, Andy; van Oort, Folkert; Plante, Alain F.; Savignac, Florence; Soucémarianadin, Laure N.; Barré, Pierre

    2018-05-01

    Changes in global soil carbon stocks have considerable potential to influence the course of future climate change. However, a portion of soil organic carbon (SOC) has a very long residence time ( > 100 years) and may not contribute significantly to terrestrial greenhouse gas emissions during the next century. The size of this persistent SOC reservoir is presumed to be large. Consequently, it is a key parameter required for the initialization of SOC dynamics in ecosystem and Earth system models, but there is considerable uncertainty in the methods used to quantify it. Thermal analysis methods provide cost-effective information on SOC thermal stability that has been shown to be qualitatively related to SOC biogeochemical stability. The objective of this work was to build the first quantitative model of the size of the centennially persistent SOC pool based on thermal analysis. We used a unique set of 118 archived soil samples from four agronomic experiments in northwestern Europe with long-term bare fallow and non-bare fallow treatments (e.g., manure amendment, cropland and grassland) as a sample set for which estimating the size of the centennially persistent SOC pool is relatively straightforward. At each experimental site, we estimated the average concentration of centennially persistent SOC and its uncertainty by applying a Bayesian curve-fitting method to the observed declining SOC concentration over the duration of the long-term bare fallow treatment. Overall, the estimated concentrations of centennially persistent SOC ranged from 5 to 11 g C kg-1 of soil (lowest and highest boundaries of four 95 % confidence intervals). Then, by dividing the site-specific concentrations of persistent SOC by the total SOC concentration, we could estimate the proportion of centennially persistent SOC in the 118 archived soil samples and the associated uncertainty. The proportion of centennially persistent SOC ranged from 0.14 (standard deviation of 0.01) to 1 (standard deviation of 0.15). Samples were subjected to thermal analysis by Rock-Eval 6 that generated a series of 30 parameters reflecting their SOC thermal stability and bulk chemistry. We trained a nonparametric machine-learning algorithm (random forests multivariate regression model) to predict the proportion of centennially persistent SOC in new soils using Rock-Eval 6 thermal parameters as predictors. We evaluated the model predictive performance with two different strategies. We first used a calibration set (n = 88) and a validation set (n = 30) with soils from all sites. Second, to test the sensitivity of the model to pedoclimate, we built a calibration set with soil samples from three out of the four sites (n = 84). The multivariate regression model accurately predicted the proportion of centennially persistent SOC in the validation set composed of soils from all sites (R2 = 0.92, RMSEP = 0.07, n = 30). The uncertainty of the model predictions was quantified by a Monte Carlo approach that produced conservative 95 % prediction intervals across the validation set. The predictive performance of the model decreased when predicting the proportion of centennially persistent SOC in soils from one fully independent site with a different pedoclimate, yet the mean error of prediction only slightly increased (R2 = 0.53, RMSEP = 0.10, n = 34). 
This model based on Rock-Eval 6 thermal analysis can thus be used to predict the proportion of centennially persistent SOC with known uncertainty in new soil samples from different pedoclimates, at least for sites that have similar Rock-Eval 6 thermal characteristics to those included in the calibration set. Our study reinforces the evidence that there is a link between the thermal and biogeochemical stability of soil organic matter and demonstrates that Rock-Eval 6 thermal analysis can be used to quantify the size of the centennially persistent organic carbon pool in temperate soils.
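
    The modeling step described above is a standard supervised-regression workflow. The sketch below mirrors its shape (118 samples, 30 thermal predictors, a 30-sample validation split, random forest regression) on synthetic data; it is not the authors' calibrated model, and the predictor-response relationship is invented.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    # Synthetic stand-in: 118 samples x 30 "Rock-Eval" thermal parameters,
    # with the persistent-SOC proportion driven by a few of them.
    rng = np.random.default_rng(5)
    X = rng.normal(size=(118, 30))
    y = np.clip(0.5 + 0.15 * X[:, 0] - 0.1 * X[:, 3]
                + 0.05 * rng.normal(size=118), 0, 1)

    X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=30, random_state=0)
    model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_cal, y_cal)

    pred = model.predict(X_val)
    print(f"R2 = {r2_score(y_val, pred):.2f}, "
          f"RMSEP = {mean_squared_error(y_val, pred) ** 0.5:.3f}")
    ```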

  9. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Development of a Probabilistic Dynamic Synthesis Method for the Analysis of Nondeterministic Structures

    NASA Technical Reports Server (NTRS)

    Brown, A. M.

    1998-01-01

    Accounting for the statistical geometric and material variability of structures in analysis has been a topic of considerable research for the last 30 years. The determination of quantifiable measures of statistical probability of a desired response variable, such as natural frequency, maximum displacement, or stress, to replace experience-based "safety factors" has been a primary goal of these studies. There are, however, several problems associated with their satisfactory application to realistic structures, such as bladed disks in turbomachinery. These include the accurate definition of the input random variables (rv's), the large size of the finite element models frequently used to simulate these structures, which makes even a single deterministic analysis expensive, and accurate generation of the cumulative distribution function (CDF) necessary to obtain the probability of the desired response variables. The research presented here applies a methodology called probabilistic dynamic synthesis (PDS) to solve these problems. The PDS method uses dynamic characteristics of substructures measured from modal test as the input rv's, rather than "primitive" rv's such as material or geometric uncertainties. These dynamic characteristics, which are the free-free eigenvalues, eigenvectors, and residual flexibility (RF), are readily measured and for many substructures, a reasonable sample set of these measurements can be obtained. The statistics for these rv's accurately account for the entire random character of the substructure. Using the RF method of component mode synthesis, these dynamic characteristics are used to generate reduced-size sample models of the substructures, which are then coupled to form system models. These sample models are used to obtain the CDF of the response variable by either applying Monte Carlo simulation or by generating data points for use in the response surface reliability method, which can perform the probabilistic analysis with an order of magnitude less computational effort. Both free- and forced-response analyses have been performed, and the results indicate that, while there is considerable room for improvement, the method produces usable and more representative solutions for the design of realistic structures with a substantial savings in computer time.
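
    The final step of such an approach, building the CDF of a response variable by Monte Carlo simulation over random inputs, can be illustrated in miniature. The single-degree-of-freedom frequency model and input statistics below are invented stand-ins for the measured substructure characteristics the PDS method actually uses.

    ```python
    import numpy as np

    # Monte Carlo construction of the CDF of a response variable, here a
    # natural frequency f = sqrt(k/m) / (2*pi) with random stiffness and mass.
    rng = np.random.default_rng(7)
    k = rng.normal(1.0e6, 5e4, 10_000)       # stiffness samples (N/m)
    m = rng.normal(2.0, 0.05, 10_000)        # mass samples (kg)
    f = np.sqrt(k / m) / (2 * np.pi)

    f_sorted = np.sort(f)
    cdf = np.arange(1, f.size + 1) / f.size  # empirical CDF
    # Probability that the frequency stays below a design limit, e.g. 115 Hz
    print(np.interp(115.0, f_sorted, cdf))
    ```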

  11. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition, or learning, the effect sizes were relatively large even though the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.

  12. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  13. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  14. Seven ways to increase power without increasing N.

    PubMed

    Hansen, W B; Collins, L M

    1994-01-01

    Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straight-forward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.
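
    The trade-off between N and effect size that this chapter exploits is easy to quantify. The sketch below uses a normal approximation to the power of a two-sided two-sample test (a simplification of the usual noncentral-t calculation); it shows that doubling the effect size buys roughly the same power as quadrupling N.

    ```python
    from scipy.stats import norm

    def power_two_sample(n_per_group, effect_size, alpha=0.05):
        """Approximate power of a two-sided two-sample test (normal approx.).

        effect_size: standardized mean difference (Cohen's d).
        """
        z_alpha = norm.ppf(1 - alpha / 2)
        ncp = effect_size * (n_per_group / 2) ** 0.5   # noncentrality parameter
        return norm.cdf(ncp - z_alpha) + norm.cdf(-ncp - z_alpha)

    # Power rises with N, but also with effect size: d = 0.2 at n = 200
    # gives about the same power as d = 0.4 at n = 50.
    for d in (0.2, 0.4):
        print(d, [round(power_two_sample(n, d), 2) for n in (50, 100, 200, 400)])
    ```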

  15. Seabed mapping and characterization of sediment variability using the usSEABED data base

    USGS Publications Warehouse

    Goff, J.A.; Jenkins, C.J.; Williams, S. Jeffress

    2008-01-01

    We present a methodology for statistical analysis of randomly located marine sediment point data, and apply it to the US continental shelf portions of usSEABED mean grain size records. The usSEABED database, like many modern, large environmental datasets, is heterogeneous and interdisciplinary. We statistically test the database as a source of mean grain size data, and from it provide a first examination of regional seafloor sediment variability across the entire US continental shelf. Data derived from laboratory analyses ("extracted") and from word-based descriptions ("parsed") are treated separately, and they are compared statistically and deterministically. Data records are selected for spatial analysis by their location within sample regions: polygonal areas defined in ArcGIS chosen by geography, water depth, and data sufficiency. We derive isotropic, binned semivariograms from the data, and invert these for estimates of noise variance, field variance, and decorrelation distance. The highly erratic nature of the semivariograms is a result both of the random locations of the data and of the high level of data uncertainty (noise). This decorrelates the data covariance matrix for the inversion, and largely prevents robust estimation of the fractal dimension. Our comparison of the extracted and parsed mean grain size data demonstrates important differences between the two. In particular, extracted measurements generally produce finer mean grain sizes, lower noise variance, and lower field variance than parsed values. Such relationships can be used to derive a regionally dependent conversion factor between the two. Our analysis of sample regions on the US continental shelf revealed considerable geographic variability in the estimated statistical parameters of field variance and decorrelation distance. Some regional relationships are evident, and overall there is a tendency for field variance to be higher where the average mean grain size is finer grained. Surprisingly, parsed and extracted noise magnitudes correlate with each other, which may indicate that some portion of the data variability that we identify as "noise" is caused by real grain size variability at very short scales. Our analyses demonstrate that by applying a bias-correction proxy, usSEABED data can be used to generate reliable interpolated maps of regional mean grain size and sediment character. 
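
    A binned isotropic semivariogram of the kind inverted here can be computed directly from point data. A minimal sketch follows (brute-force pairwise distances, adequate for small samples); the synthetic field is only a stand-in for usSEABED grain size records.

    ```python
    import numpy as np

    def binned_semivariogram(xy, z, bin_edges):
        """Isotropic empirical semivariogram from irregularly located samples.

        xy: (n, 2) coordinates; z: values (e.g., mean grain size).
        Returns bin centers and 0.5 * mean[(z_i - z_j)^2] per lag bin.
        """
        d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
        g = 0.5 * (z[:, None] - z[None, :]) ** 2
        iu = np.triu_indices(len(z), k=1)            # each pair counted once
        d, g = d[iu], g[iu]
        centers, gamma = [], []
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            m = (d >= lo) & (d < hi)
            if m.any():
                centers.append((lo + hi) / 2)
                gamma.append(g[m].mean())
        return np.array(centers), np.array(gamma)

    # Toy usage on random locations with a short-range correlated field
    rng = np.random.default_rng(6)
    xy = rng.uniform(0, 100, size=(300, 2))
    z = np.sin(xy[:, 0] / 15) + 0.3 * rng.normal(size=300)   # trend + noise
    print(binned_semivariogram(xy, z, np.arange(0, 60, 10)))
    ```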

  16. Analysis of survival data from telemetry projects

    USGS Publications Warehouse

    Bunck, C.M.; Winterstein, S.R.; Pollock, K.H.

    1985-01-01

    Telemetry techniques can be used to study the survival rates of animal populations and are particularly suitable for species or settings for which band recovery models are not. Statistical methods for estimating survival rates and parameters of survival distributions from observations of radio-tagged animals will be described. These methods have been applied to medical and engineering studies and to the study of nest success. Estimates and tests based on discrete models, originally introduced by Mayfield, and on continuous models, both parametric and nonparametric, will be described. Generalizations, including staggered entry of subjects into the study and identification of mortality factors will be considered. Additional discussion topics will include sample size considerations, relocation frequency for subjects, and use of covariates.
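
    The discrete (Mayfield-type) estimator mentioned above is simple enough to state in full: the daily mortality rate is deaths divided by total exposure-days, and interval survival assumes a constant daily rate. A sketch with invented telemetry numbers:

    ```python
    def mayfield_daily_survival(exposure_days, deaths):
        """Mayfield estimator: daily survival = 1 - deaths / exposure-days."""
        return 1 - deaths / exposure_days

    def interval_survival(daily_survival, days):
        """Survival over an interval, assuming a constant daily rate."""
        return daily_survival ** days

    # Toy usage: 40 radio-tagged animals tracked for 1,250 transmitter-days
    # with 5 deaths; estimate 30-day survival.
    s_daily = mayfield_daily_survival(1250, 5)
    print(round(s_daily, 4), round(interval_survival(s_daily, 30), 3))
    ```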

  17. On the crystallization of polymer composites with inorganic fullerene-like particles.

    PubMed

    Enyashin, Andrey N; Glazyrina, Polina Yu

    2012-05-21

    The effect of a sulfide fullerene-like particle embedded in a polymer has been studied by molecular dynamics simulations on the nanosecond time scale, using a mesoscopic Van der Waals force field evaluated for the case of a spherical particle. Even in this approach, which neglects the atomistic features of the surface, the inorganic particle acts as a nucleation agent facilitating the crystallization of the polymeric sample. A consideration of the Van der Waals force field of multi-walled sulfide nanoparticles suggests that, in the absence of chemical interactions, the size of the nanoparticle dominates the adhesion strength, while the number of sulfide layers composing the cage does not play a role.

  18. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions with an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades, and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Penile size and penile enlargement surgery: a review.

    PubMed

    Dillon, B E; Chama, N B; Honig, S C

    2008-01-01

    Penile size is a considerable concern for men of all ages. Herein, we review the data on penile size and conditions that will result in penile shortening. Penile augmentation procedures are discussed, including indications, procedures and complications of penile lengthening procedures, penile girth enhancement procedures and penile skin reconstruction.

  20. 4 CFR 21.5 - Protest issues not for consideration.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... official to file a protest or not to file a protest in connection with a public-private competition. [61 FR... business size standards and North American Industry Classification System (NAICS) standards. Challenges of established size standards or the size status of particular firms, and challenges of the selected NAICS code...

  1. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    PubMed

    Kacmarczyk, Thadeous J; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates; size, position, and statistical significance of peak detection; and changes in gene annotation. We found that, for the histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects, and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well-characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  2. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection

    PubMed Central

    Kacmarczyk, Thadeous J.; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates; size, position, and statistical significance of peak detection; and changes in gene annotation. We found that, for the histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects, and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well-characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal. PMID:26066343

  3. Accident prediction model for public highway-rail grade crossings.

    PubMed

    Lu, Pan; Tolliver, Denver

    2016-05-01

    Considerable research has focused on roadway accident frequency analysis, but relatively little research has examined safety evaluation at highway-rail grade crossings. Highway-rail grade crossings are critical spatial locations of utmost importance for transportation safety, because traffic crashes at highway-rail grade crossings are often catastrophic, with serious consequences. The Poisson regression model has been employed as a good starting point for analyzing vehicle accident frequency for many years. The most commonly applied variations of Poisson include the negative binomial and zero-inflated Poisson models. These models are used to deal with common crash data issues such as over-dispersion (sample variance is larger than the sample mean) and a preponderance of zeros (low sample mean and small sample size). On rare occasions traffic crash data have been shown to be under-dispersed (sample variance is smaller than the sample mean), and traditional distributions such as the Poisson or negative binomial cannot handle under-dispersion well. The objective of this study is to investigate and compare various alternative highway-rail grade crossing accident frequency models that can handle the under-dispersion issue. The contributions of the paper are two-fold: (1) the application of probability models that can deal with under-dispersion issues and (2) insights regarding vehicle crashes at public highway-rail grade crossings. Copyright © 2016 Elsevier Ltd. All rights reserved.
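
    The under- versus over-dispersion distinction motivating the paper is easy to diagnose with the variance-to-mean ratio, since a Poisson variable has a dispersion index of 1. A small sketch with invented crash counts:

    ```python
    import numpy as np

    def dispersion_index(counts):
        """Sample variance-to-mean ratio: >1 over-dispersed, <1 under-dispersed
        relative to the Poisson assumption (variance equal to the mean)."""
        counts = np.asarray(counts, float)
        return counts.var(ddof=1) / counts.mean()

    # Toy crash counts per crossing: many zeros with a few large values
    # (over-dispersed) versus a narrow spread around the mean (under-dispersed).
    over = np.array([0, 0, 0, 0, 1, 0, 0, 5, 0, 3])
    under = np.array([2, 3, 2, 2, 3, 2, 3, 2, 2, 3])
    print(dispersion_index(over), dispersion_index(under))
    ```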

  4. Nanomechanical Characterization of Temperature-Dependent Mechanical Properties of Ion-Irradiated Zirconium with Consideration of Microstructure and Surface Damage

    NASA Astrophysics Data System (ADS)

    Marsh, Jonathan; Zhang, Yang; Verma, Devendra; Biswas, Sudipta; Haque, Aman; Tomar, Vikas

    2015-12-01

    Zirconium alloys for nuclear applications with different microstructures were produced by manufacturing processes such as chipping, rolling, and annealing. The two Zr samples, rolled and rolled-annealed, were subjected to different levels of irradiation, 1 keV and 100 eV, to study the effect of irradiation dosage. The effect of microstructure and irradiation on the mechanical properties (reduced modulus, hardness, indentation yield strength) was analyzed with nanoindentation experiments, which were carried out in the temperature range of 25°C to 450°C to investigate temperature dependence. An indentation size effect analysis was performed, and the mechanical properties were also corrected for oxidation effects at high temperatures. Irradiation-induced hardening was observed, with the rolled samples exhibiting a higher increase than the rolled-and-annealed samples. The relevant material parameters of the Anand viscoplastic model were determined for Zr samples with different levels of irradiation to account for viscoplasticity at high temperatures. The effect of the microstructure and irradiation on the stress-strain curve, along with the influence of temperature on the mechanisms of irradiation creep such as the formation of vacancies and interstitials, is presented. The yield strength of the irradiated samples was found to be higher than that of the unirradiated samples, and it showed a decreasing trend with temperature.

  5. Tribocorrosion behaviour of nanostructured titanium substrates processed by high-pressure torsion

    NASA Astrophysics Data System (ADS)

    Faghihi, S.; Li, D.; Szpunar, J. A.

    2010-12-01

    Aseptic loosening induced by wear particles from artificial bearing materials is one of the main causes of malfunction in total hip replacements. With the increase in young and active patients, complications in revision surgeries, and immense health care costs, there is considerable interest in wear-resistant materials that can endure longer in the harsh and corrosive body environment. Here, the tribological behaviour of nanostructured titanium substrates processed by high-pressure torsion (HPT) is investigated and compared with coarse-grained samples. High resolution transmission electron microscopy reveals that the nanostructured sample has a grain size of 5-10 nm, compared to ~10 µm and ~50 µm for the untreated and annealed substrates, respectively. Dry and wet wear tests were performed using a linear reciprocating ball-on-flat tribometer. The nanostructured samples show the best dry wear resistance and the lowest wear rate in the electrolyte. They also exhibited significantly less plastic deformation and no wear-induced change in preferred orientation. Electrochemical impedance spectroscopy (EIS) shows lower corrosion resistance for the nanostructured samples. However, under the combined action of wear and corrosion the nanostructured samples show superior performance, which makes them an attractive candidate for applications in which wear and corrosion act simultaneously.

  6. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our laboratory. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values for analyzing the images of counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparison with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for each examination. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of examinations had a sufficient endothelial cell quantity; customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; no examination had a sufficient endothelial cell quantity; customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; no examination had a sufficient endothelial cell quantity; customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. Endothelial samples need to include more cells for examinations to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
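
    The Cells Analyzer routine itself is proprietary, but one common way to turn a 95% reliability degree and a 5% relative error into a required cell count uses the normal approximation n ≥ (z·CV/RE)². A hedged sketch (the coefficient of variation here is an assumed input, not a value from the study):

        # Cells needed so the density estimate meets a relative-error target.
        from scipy.stats import norm

        def required_cells(cv, re=0.05, rd=0.95):
            z = norm.ppf(1 - (1 - rd) / 2)    # RD of 95% -> z = 1.96
            return (z * cv / re) ** 2

        print(round(required_cells(cv=0.30)))  # assumed CV of 30% -> ~138 cells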

  7. Automated storm water sampling on small watersheds

    USGS Publications Warehouse

    Harmel, R.D.; King, K.W.; Slade, R.M.

    2003-01-01

    Few guidelines are currently available to assist in designing appropriate automated storm water sampling strategies for small watersheds. Therefore, guidance is needed to develop strategies that achieve an appropriate balance between accurate characterization of storm water quality and loads and limitations of budget, equipment, and personnel. In this article, we explore the important sampling strategy components (minimum flow threshold, sampling interval, and discrete versus composite sampling) and project-specific considerations (sampling goal, sampling and analysis resources, and watershed characteristics) based on personal experiences and pertinent field and analytical studies. These components and considerations are important in achieving the balance between sampling goals and limitations because they determine how and when samples are taken and the potential sampling error. Several general recommendations are made, including: setting low minimum flow thresholds, using flow-interval or variable time-interval sampling, and using composite sampling to limit the number of samples collected. Guidelines are presented to aid in selection of an appropriate sampling strategy based on user's project-specific considerations. Our experiences suggest these recommendations should allow implementation of a successful sampling strategy for most small watershed sampling projects with common sampling goals.
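
    The flow-interval compositing the authors recommend can be sketched in a few lines: ignore flow below the minimum threshold and draw one composite aliquot each time a fixed volume of storm flow passes. All thresholds and volumes below are illustrative placeholders, not values from the article:

        # Flow-paced composite sampling trigger; all numbers are hypothetical.
        MIN_FLOW = 0.01        # m^3/s, minimum flow threshold
        ALIQUOT_VOLUME = 10.0  # m^3 of flow between composite aliquots
        DT = 60.0              # s between flow readings

        def aliquot_indices(flows):
            """Indices of flow readings at which a composite aliquot is drawn."""
            triggered, accumulated = [], 0.0
            for i, q in enumerate(flows):
                if q < MIN_FLOW:
                    continue
                accumulated += q * DT
                if accumulated >= ALIQUOT_VOLUME:
                    triggered.append(i)
                    accumulated -= ALIQUOT_VOLUME
            return triggered

        storm = [0.0, 0.02, 0.08, 0.15, 0.12, 0.05, 0.01, 0.0]  # toy hydrograph
        print(aliquot_indices(storm))   # -> [3, 4]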

  8. Low‐pathogenic notifiable avian influenza serosurveillance and the risk of infection in poultry – a critical review of the European Union active surveillance programme (2005–2007)

    PubMed Central

    Gonzales, J. L.; Elbers, A. R. W.; Bouma, A.; Koch, G.; De Wit, J. J.; Stegeman, J. A.

    2010-01-01

    Background: Since 2003, Member States (MS) of the European Union (EU) have implemented serosurveillance programmes for low pathogenic notifiable avian influenza (LPNAI) in poultry. There is a need to evaluate this surveillance activity in order to optimize the programme's design. Objectives: To evaluate MS sampling operations [sample size and targeted poultry types (PTs)] and their relation to the probability of detection, and to estimate each PT's relative risk (RR) of being infected. Methods: Reported data from the surveillance carried out from 2005 to 2007 were analyzed using (i) descriptive indicators to characterize both MS sampling operations and their relation to the probability of detection and the LPNAI epidemiological situation, and (ii) multivariable methods to estimate each PT's RR of being infected. Results: Member States sampling more holdings than recommended by the EU had a significantly higher probability of detection. Ducks & geese, game-birds, ratites and "other" poultry types had a significantly higher RR of being seropositive than chicken categories. The seroprevalence in duck & goose and game-bird holdings appears to be higher than 5%, the EU-recommended design prevalence (DP), whereas in the chicken and turkey categories seroprevalence was considerably lower than 5%, implying a risk of missing LPNAI-seropositive holdings. Conclusion: It is recommended that the European Commission discuss with its MS whether the results of this evaluation call for refinement of surveillance characteristics such as sampling frequency, the between-holding DP and MS sampling operation strategies. PMID:20167049
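
    For intuition on why sampling beyond the EU-recommended size raised the probability of detection, the standard freedom-from-infection calculation links the design prevalence to the number of holdings sampled (assuming a perfect test and a large population; a simplification of the programme's actual design):

        # Detection probability and required sample size at design prevalence p.
        import math

        def detection_probability(n, p=0.05):
            return 1 - (1 - p) ** n

        def required_sample(p=0.05, confidence=0.95):
            return math.ceil(math.log(1 - confidence) / math.log(1 - p))

        print(required_sample())                    # 59 holdings for 95% confidence
        print(f"{detection_probability(59):.3f}")   # ~0.952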

  9. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
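
    The authors' Excel/Shiny calculator is not reproduced here, but the usual design-effect logic conveys the idea: with clusters of size 1 (singletons) and 2 (twin pairs), the independent-data sample size is inflated by DE = 1 + ((Σm²/Σm) − 1)·ICC. A hedged sketch with hypothetical inputs:

        # Inflate an independent-data sample size for clustering due to twins
        # (standard unequal-cluster-size design effect, not the authors' exact tool).
        import math

        def twins_adjusted_n(n_independent, icc, prop_twin_infants):
            """prop_twin_infants: fraction of infants expected to come from twin pairs."""
            pairs = prop_twin_infants / 2          # twin pairs per enrolled infant
            singletons = 1 - prop_twin_infants
            mean_sq = (4 * pairs + singletons) / (2 * pairs + singletons)
            design_effect = 1 + (mean_sq - 1) * icc
            return math.ceil(n_independent * design_effect)

        print(twins_adjusted_n(n_independent=300, icc=0.5, prop_twin_infants=0.2))  # 330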

  10. Influence of arsenic flow on the crystal structure of epitaxial GaAs grown at low temperatures on GaAs (100) and (111)A substrates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galiev, G. B.; Klimov, E. A.; Vasiliev, A. L.

    The influence of arsenic flow in a growth chamber on the crystal structure of GaAs grown by molecular-beam epitaxy at a temperature of 240°C on GaAs (100) and (111)A substrates has been investigated. The flow ratio γ of arsenic As4 and gallium was varied in the range from 16 to 50. GaAs films were either undoped, or homogeneously doped with silicon, or contained three equidistantly spaced silicon δ-layers. The structural quality of the annealed samples has been investigated by transmission electron microscopy. It is established for the first time that silicon δ-layers in “low-temperature” GaAs serve as formation centers of arsenic precipitates. Their average size, concentration, and spatial distribution are estimated. The dependence of the film structural quality on γ is analyzed. Regions 100–150 nm in size have been revealed in some samples and identified (by X-ray microanalysis) as pores. It is found that, in the entire range of γ under consideration, GaAs films on (111)A substrates have a poorer structural quality and become polycrystalline beginning with a thickness of 150–200 nm.

  11. Two-stage phase II oncology designs using short-term endpoints for early stopping.

    PubMed

    Kunz, Cornelia U; Wason, James Ms; Kieser, Meinhard

    2017-08-01

    Phase II oncology trials are conducted to evaluate whether the tumour activity of a new treatment is promising enough to warrant further investigation. The most commonly used approach in this context is a two-stage single-arm design with a binary endpoint. As for all designs with an interim analysis, its efficiency strongly depends on the relation between the recruitment rate and the follow-up time required to measure the patients' outcomes. Usually, recruitment is paused once the first-stage sample size is reached, until the outcomes of all first-stage patients are available. This may lead to a considerable increase in trial length and, with it, a delay in the drug development process. We propose a design in which an intermediate endpoint is used in the interim analysis to decide whether or not the study continues to a second stage. Optimal and minimax versions of this design are derived. The characteristics of the proposed design in terms of type I error rate, power, maximum and expected sample size, and trial duration are investigated. Guidance is given on how to select the most appropriate design. Application is illustrated by a phase II oncology trial in patients with advanced angiosarcoma, which motivated this research.
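
    The intermediate-endpoint design itself is not reproduced here, but its operating characteristics rest on the same binomial bookkeeping as the classical single-endpoint two-stage design it extends: stop after n1 patients if no more than r1 responses are seen, otherwise continue to n and reject the null if responses exceed r. A sketch evaluating an illustrative design:

        # Operating characteristics of a classical single-arm two-stage design.
        from scipy.stats import binom

        def two_stage(p, r1, n1, r, n):
            pet = binom.cdf(r1, n1, p)                  # probability of early termination
            reject = sum(binom.pmf(x, n1, p) * binom.sf(r - x, n - n1, p)
                         for x in range(r1 + 1, n1 + 1))
            e_n = n1 + (1 - pet) * (n - n1)             # expected sample size
            return pet, reject, e_n

        # Illustrative design: r1=1/n1=12, r=5/n=35, with p0=0.10, p1=0.30
        for p, label in [(0.10, "type I error"), (0.30, "power")]:
            pet, rej, en = two_stage(p, 1, 12, 5, 35)
            print(f"{label}: {rej:.3f} (PET={pet:.2f}, E[N]={en:.1f})")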

  12. Overcoming the winner's curse: estimating penetrance parameters from case-control data.

    PubMed

    Zollner, Sebastian; Pritchard, Jonathan K

    2007-04-01

    Genomewide association studies are now a widely used approach in the search for loci that affect complex traits. After detection of significant association, estimates of penetrance and allele-frequency parameters for the associated variant indicate the importance of that variant and facilitate the planning of replication studies. However, when these estimates are based on the original data used to detect the variant, the results are affected by an ascertainment bias known as the "winner's curse." The actual genetic effect is typically smaller than its estimate. This overestimation of the genetic effect may cause replication studies to fail because the necessary sample size is underestimated. Here, we present an approach that corrects for the ascertainment bias and generates an estimate of the frequency of a variant and its penetrance parameters. The method produces a point estimate and confidence region for the parameter estimates. We study the performance of this method using simulated data sets and show that it is possible to greatly reduce the bias in the parameter estimates, even when the original association study had low power. The uncertainty of the estimate decreases with increasing sample size, independent of the power of the original test for association. Finally, we show that application of the method to case-control data can improve the design of replication studies considerably.
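
    The bias itself is easy to reproduce in a hedged miniature simulation (this is the phenomenon the paper corrects, not the authors' correction method): among studies whose estimate crosses the significance threshold, the average reported effect overshoots the truth:

        # Winner's curse in miniature: estimates reported only when significant.
        import numpy as np

        rng = np.random.default_rng(1)
        true_beta, se, z_crit = 0.10, 0.05, 1.96   # illustrative effect, SE, threshold
        estimates = rng.normal(true_beta, se, size=100_000)
        significant = estimates / se > z_crit

        print(f"power:               {significant.mean():.2f}")
        print(f"mean estimate (all): {estimates.mean():.3f}")              # ~0.100
        print(f"mean estimate (sig): {estimates[significant].mean():.3f}")  # inflated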

  13. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application to large data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme that adapts its outputs to the input data density. With this quantization scheme, a large data set is reduced to a small subset, generally yielding a considerable sample size reduction. In particular, this reduction can save significant computational cost when the quantized subset is used for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), for which an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data, with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
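
    The density-dependent quantizer itself is only outlined in the abstract, but the computational step it feeds can be reproduced with any small landmark subset driving a Nyström feature approximation; in this hedged sketch, random subsampling stands in for DQS and a ridge classifier in the approximate feature space stands in for the LS-SVM primal solution:

        # Nystroem features from a reduced subset + linear model (LS-SVM-like).
        from sklearn.datasets import make_classification
        from sklearn.kernel_approximation import Nystroem
        from sklearn.linear_model import RidgeClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
        model = make_pipeline(
            Nystroem(kernel="rbf", gamma=0.1, n_components=200, random_state=0),
            RidgeClassifier(alpha=1.0),
        )
        print(cross_val_score(model, X, y, cv=3).mean())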

  14. The musculoskeletal consequences of breast reconstruction using the latissimus dorsi muscle for women following mastectomy for breast cancer: A critical review.

    PubMed

    Blackburn, N E; Mc Veigh, J G; Mc Caughan, E; Wilson, I M

    2018-03-01

    Breast reconstruction using the latissimus dorsi (LD) flap following mastectomy is an important management option in breast cancer. However, one common, but often ignored, complication following LD flap surgery is shoulder dysfunction. The aim of this critical review was to comprehensively assess the musculoskeletal impact of LD breast reconstruction and evaluate functional outcome following surgery. Five electronic databases were searched: Medline, Embase, CINAHL Plus (Cumulative Index to Nursing and Allied Health), PubMed and Web of Science. Databases were searched from 2006 to 2016, and only full-text, English-language articles were included. Twenty-two observational studies and two surveys were reviewed, with sample sizes ranging from six to 206 participants. The majority of studies had small sample sizes and were retrospective in nature. Nevertheless, there is evidence to suggest some degree of weakness and reduced mobility at the shoulder following LD muscle transfer. The literature demonstrates considerable morbidity in the immediate post-operative period, with functional recovery varying between studies. The available work tends to be limited and often gives conflicting results; therefore, further investigation is required to determine the underlying factors that contribute to reductions in function and activities of daily living. © 2017 John Wiley & Sons Ltd.

  15. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller required sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method are superior to Sobel's method, but the product method is recommended for use in practice because of its lower computational load compared with bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study designs.
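
    As a concrete anchor for the comparison, the Sobel statistic tests the indirect effect a·b with a first-order delta-method standard error (the path coefficients below are assumed for illustration, not taken from the paper):

        # Sobel test for a mediated (indirect) effect a*b.
        from math import sqrt
        from scipy.stats import norm

        def sobel(a, se_a, b, se_b):
            se_ab = sqrt(b**2 * se_a**2 + a**2 * se_b**2)  # delta-method SE
            z = (a * b) / se_ab
            return z, 2 * norm.sf(abs(z))

        z, p = sobel(a=0.40, se_a=0.10, b=0.30, se_b=0.12)
        print(f"z = {z:.2f}, p = {p:.4f}")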

  16. Fluctuations in energy loss and their implications for dosimetry and radiobiology

    NASA Technical Reports Server (NTRS)

    Baily, N. A.; Steigerwalt, J. E.

    1972-01-01

    Serious consideration of the physics of energy deposition indicates that a fundamental change in the interpretation of absorbed dose is required, at least for considerations of effects in biological systems. In addition, theoretical approaches to radiobiology and microdosimetry seem to require statistical treatments incorporating frequency distributions of event-size magnitudes within the volume of interest.

  17. An Analysis of Scalable GPU-Based Ray-Guided Volume Rendering

    PubMed Central

    Fogal, Thomas; Schiewe, Alexander; Krüger, Jens

    2014-01-01

    Volume rendering continues to be a critical method for analyzing large-scale scalar fields, in disciplines as diverse as biomedical engineering and computational fluid dynamics. Commodity desktop hardware has struggled to keep pace with data size increases, challenging modern visualization software to deliver responsive interactions for O(N³) algorithms such as volume rendering. We target the data type common in these domains: regularly structured data. In this work, we demonstrate that the major limitation of most volume rendering approaches is their inability to switch the data sampling rate (and thus data size) quickly. Using a volume renderer inspired by recent work, we demonstrate that the actual amount of visualizable data for a scene is typically bounded considerably below the memory available on a commodity GPU. Our instrumented renderer is used to investigate design decisions typically swept under the rug in the volume rendering literature. The renderer is freely available, with binaries for all major platforms as well as full source code, to encourage reproduction and comparison with future research. PMID:25506079

  18. Quantification and varietal variation of low molecular weight glutenin subunits (LMW-GS) using size exclusion chromatography.

    PubMed

    Dangi, Priya; Khatkar, B S

    2018-03-01

    Crude glutenin from four commercial wheat varieties, viz. C 306, HI 977, HW 2004 and PBW 550, of diverse origin and breadmaking quality, was fractionated by size-exclusion chromatography into three fractions of decreasing molecular weight. The relative quantity of peak II, which specifically contains LMW-GS, varied considerably among the varieties, as reflected in their distinct SEC profiles. The area % of peak II was highest for C 306 (22.08%), followed by PBW 550 (15.86%). The lowest proportion of LMW-GS was recovered from variety HW 2004 (9.68%). As the concentration of the sample extract injected into the column increased, peak resolution declined, along with a slight shift of retention time to higher values. The best results were obtained for variety C 306 at 100 mg protein concentration with 3 M urea buffer. Consequently, optimized conditions for purification of LMW-GS in appreciable amounts using SEC were established.

  19. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three pots of very different sizes serves to demonstrate that it is the absolute sample size that matters most in determining the accuracy of a poll's findings, not the relative sample size, i.e., the size of the sample in relation to its population.
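
    The point can be checked numerically: with the finite population correction, the margin of error of a 1,000-person poll is essentially the same whether the "pot" holds one million or three hundred million people:

        # Margin of error for a proportion: n dominates, population size N barely matters.
        from math import sqrt

        def moe(n, N=None, p=0.5, z=1.96):
            fpc = sqrt((N - n) / (N - 1)) if N else 1.0   # finite population correction
            return z * sqrt(p * (1 - p) / n) * fpc

        for N in (1_000_000, 300_000_000):
            print(f"N={N:>11,}: MOE = {moe(1000, N):.4f}")   # ~0.031 in both cases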

  20. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
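
    The recommended upper confidence limits of the SD follow from the chi-square distribution of the sample variance; a hedged sketch that computes a one-sided UCL from a pilot SD and feeds it into a standard two-sample formula (all numeric inputs are illustrative):

        # One-sided UCL for sigma from a pilot sample, then a two-sample n.
        from math import ceil, sqrt
        from scipy.stats import chi2, norm

        def sd_ucl(s, n, confidence=0.60):
            """Upper confidence limit for sigma from pilot SD s on n observations."""
            return s * sqrt((n - 1) / chi2.ppf(1 - confidence, n - 1))

        def n_per_group(sd, delta, alpha=0.05, power=0.80):
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return ceil(2 * (z * sd / delta) ** 2)

        s_pilot = 40.0
        print(n_per_group(sd=s_pilot, delta=22))                  # raw pilot SD
        print(n_per_group(sd=sd_ucl(s_pilot, n=20), delta=22))    # 60% UCL of SD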

  1. Quantifying the potential impact of measurement error in an investigation of autism spectrum disorder (ASD).

    PubMed

    Heavner, Karyn; Newschaffer, Craig; Hertz-Picciotto, Irva; Bennett, Deborah; Burstyn, Igor

    2014-05-01

    The Early Autism Risk Longitudinal Investigation (EARLI), an ongoing study of a risk-enriched pregnancy cohort, examines genetic and environmental risk factors for autism spectrum disorders (ASDs). We simulated the potential effects of both measurement error (ME) in exposures and misclassification of ASD-related phenotype (assessed as Autism Observation Scale for Infants (AOSI) scores) on measures of association generated under this study design. We investigated the impact on the power to detect true associations with exposure and the false positive rate (FPR) for a non-causal correlate of exposure (X2, r=0.7) for continuous AOSI score (linear model) versus dichotomised AOSI (logistic regression), when the sample size (n), the degree of ME in exposure, and the strength of the expected (true) OR (eOR) between exposure and AOSI varied. Exposure was a continuous variable in all linear models and was dichotomised at one SD above the mean in logistic models. Simulations reveal complex patterns and suggest that: (1) associations were attenuated, increasingly so with larger eOR and ME; (2) the FPR was considerable under many scenarios; and (3) the FPR has a complex dependence on the eOR, ME and model choice, but was greater for logistic models. The findings will stimulate work examining cost-effective strategies to reduce the impact of ME in realistic sample sizes and affirm the importance for EARLI of investment in biological samples that help precisely quantify a wide range of environmental exposures.
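
    The attenuation in finding (1) follows classical measurement-error theory: adding noise of variance σe² to a true exposure of unit variance shrinks the regression slope by the reliability ratio 1/(1 + σe²). A minimal simulation confirming the pattern:

        # Attenuation of a regression slope by classical measurement error.
        import numpy as np

        rng = np.random.default_rng(0)
        n, true_slope = 50_000, 0.5
        x = rng.normal(size=n)                    # true exposure, variance 1
        y = true_slope * x + rng.normal(size=n)   # continuous outcome (AOSI-like)

        for sigma_e in (0.0, 0.5, 1.0):
            x_obs = x + rng.normal(scale=sigma_e, size=n)
            slope = np.polyfit(x_obs, y, 1)[0]
            print(f"sigma_e={sigma_e}: fitted {slope:.3f}, "
                  f"theory {true_slope / (1 + sigma_e**2):.3f}")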

  2. Morphometric variation in the papionin muzzle and the biochronology of the South African Plio-Pleistocene karst cave deposits.

    PubMed

    Gilbert, Christopher C; Grine, Frederick E

    2010-03-01

    Papionin monkeys are widespread, relatively common members of Plio-Pleistocene faunal assemblages across Africa. For these reasons, papionin taxa have been used as biochronological indicators by which to infer the ages of the South African karst cave deposits. A recent morphometric study of South African fossil papionin muzzle shape concluded that its variation attests to a substantial and greater time depth for these sites than is generally estimated. This inference is significant, because accurate dating of the South African cave sites is critical to our knowledge of hominin evolution and mammalian biogeographic history. We here report the results of a comparative analysis of extant papionin monkeys by which variability of the South African fossil papionins may be assessed. The muzzles of 106 specimens representing six extant papionin genera were digitized and interlandmark distances were calculated. Results demonstrate that the overall amount of morphological variation present within the fossil assemblage fits comfortably within the range exhibited by the extant sample. We also performed a statistical experiment to assess the limitations imposed by small sample sizes, such as typically encountered in the fossil record. Results suggest that 15 specimens are sufficient to accurately represent the population mean for a given phenotype, but small sample sizes are insufficient to permit the accurate estimation of the population standard deviation, variance, and range. The suggestion that the muzzle morphology of fossil papionins attests to a considerable and previously unrecognized temporal depth of the South African karst cave sites is unwarranted.
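
    The subsampling experiment described can be mimicked in a few lines: repeated draws of 15 specimens from a synthetic "population" recover the mean well but misjudge spread statistics, which is the paper's central methodological point (the measurement distribution below is invented):

        # How well do samples of 15 specimens recover population statistics?
        import numpy as np

        rng = np.random.default_rng(7)
        population = rng.normal(loc=50.0, scale=5.0, size=10_000)  # a muzzle measure

        subs = rng.choice(population, size=(5_000, 15))  # 5,000 samples of 15
        print(f"means:  {subs.mean(axis=1).mean():.2f} vs {population.mean():.2f}")
        print(f"SDs:    {subs.std(axis=1, ddof=1).mean():.2f} vs {population.std():.2f}")
        print(f"ranges: {np.ptp(subs, axis=1).mean():.2f} vs {np.ptp(population):.2f}")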

  3. Assessing readability formula differences with written health information materials: application, results, and recommendations.

    PubMed

    Wang, Lih-Wern; Miller, Michael J; Schmitt, Michael R; Wen, Frances K

    2013-01-01

    Readability formulas are often used to guide the development and evaluation of literacy-sensitive written health information. However, readability formula results may vary considerably as a result of differences in software processing algorithms and in how each formula is applied. These variations complicate the interpretation of reading grade level estimates, particularly without a uniform guideline for applying and interpreting readability formulas. This research sought to (1) identify commonly used readability formulas reported in the health care literature, (2) demonstrate the use of the most commonly used readability formulas on written health information, (3) compare and contrast the differences when applying common readability formulas to identical selections of written health information, and (4) provide recommendations for choosing an appropriate readability formula for written health-related materials to optimize their use. A literature search was conducted to identify the most commonly used readability formulas in the health care literature. Each of the identified formulas was then applied to word samples from 15 unique examples of written health information about depression and its treatment. Readability estimates from common readability formulas were compared with respect to text sample size, selection, formatting, software type, and/or hand calculation, and recommendations for their use were provided. The Flesch-Kincaid formula was most commonly used (57.42%). Readability formulas demonstrated variability of up to 5 reading grade levels on the same text. The Simple Measure of Gobbledygook (SMOG) readability formula performed most consistently. Depending on text sample size, selection, formatting, software, and/or hand calculation, a single readability formula produced estimates varying by up to 6 reading grade levels. The SMOG formula appears best suited for health care applications because of its consistency of results, higher level of expected comprehension, use of more recent validation criteria for determining reading grade level estimates, and simplicity of use. To improve the interpretation of readability results, reported reading grade level estimates from any formula should be accompanied by information about the word sample size, the location of word sampling in the text, formatting, and the method of calculation. Copyright © 2013 Elsevier Inc. All rights reserved.
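
    For reference, the SMOG grade the authors favor is a closed-form function of the polysyllabic-word and sentence counts; a sketch with hand-supplied counts (syllable counting itself is where software implementations diverge):

        # McLaughlin's SMOG reading grade from sample counts.
        from math import sqrt

        def smog_grade(polysyllable_count, sentence_count):
            """Words of 3+ syllables in the sampled sentences, normalized to 30."""
            return 3.1291 + 1.0430 * sqrt(polysyllable_count * 30 / sentence_count)

        print(f"{smog_grade(polysyllable_count=42, sentence_count=30):.1f}")  # ~9.9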

  4. Revealing the influence of water-cement ratio on the pore size distribution in hydrated cement paste by using cyclohexane

    NASA Astrophysics Data System (ADS)

    Bede, Andrea; Ardelean, Ioan

    2017-12-01

    Varying the amount of water in a concrete mix will influence its final properties considerably due to the changes in the capillary porosity. That is why a non-destructive technique is necessary for revealing the capillary pore distribution inside hydrated cement based materials and linking the capillary porosity with the macroscopic properties of these materials. In the present work, we demonstrate a simple approach for revealing the differences in capillary pore size distributions introduced by the preparation of cement paste with different water-to-cement ratios. The approach relies on monitoring the nuclear magnetic resonance transverse relaxation distribution of cyclohexane molecules confined inside the cement paste pores. The technique reveals the whole spectrum of pores inside the hydrated cement pastes, allowing a qualitative and quantitative analysis of different pore sizes. The cement pastes with higher water-to-cement ratios show an increase in capillary porosity, while for all the samples the intra-C-S-H and inter-C-S-H pores (also known as gel pores) remain unchanged. The technique can be applied to various porous materials with internal mineral surfaces.

  5. Triboelectric charging of volcanic ash from the 2011 Grímsvötn eruption.

    PubMed

    Houghton, Isobel M P; Aplin, Karen L; Nicoll, Keri A

    2013-09-13

    The plume from the 2011 eruption of Grímsvötn was highly electrically charged, as shown by the considerable lightning activity measured by the United Kingdom Met Office's low-frequency lightning detection network. Previous measurements of volcanic plumes have shown that ash particles are electrically charged up to hundreds of kilometers away from the vent, which indicates that the ash continues to charge in the plume [R. G. Harrison, K. A. Nicoll, Z. Ulanowski, and T. A. Mather, Environ. Res. Lett. 5, 024004 (2010); H. Hatakeyama J. Meteorol. Soc. Jpn. 27, 372 (1949)]. In this Letter, we study triboelectric charging of different size fractions of a sample of volcanic ash experimentally. Consistently with previous work, we find that the particle size distribution is a determining factor in the charging. Specifically, our laboratory experiments demonstrate that the normalized span of the particle size distribution plays an important role in the magnitude of charging generated. The influence of the normalized span on plume charging suggests that all ash plumes are likely to be charged, with implications for remote sensing and plume lifetime through scavenging effects.

  6. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small to medium sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remote sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779

  7. Triboelectric Charging of Volcanic Ash from the 2011 Grímsvötn Eruption

    NASA Astrophysics Data System (ADS)

    Houghton, Isobel M. P.; Aplin, Karen L.; Nicoll, Keri A.

    2013-09-01

    The plume from the 2011 eruption of Grímsvötn was highly electrically charged, as shown by the considerable lightning activity measured by the United Kingdom Met Office’s low-frequency lightning detection network. Previous measurements of volcanic plumes have shown that ash particles are electrically charged up to hundreds of kilometers away from the vent, which indicates that the ash continues to charge in the plume [R. G. Harrison, K. A. Nicoll, Z. Ulanowski, and T. A. Mather, Environ. Res. Lett. 5, 024004 (2010); H. Hatakeyama J. Meteorol. Soc. Jpn. 27, 372 (1949)]. In this Letter, we study triboelectric charging of different size fractions of a sample of volcanic ash experimentally. Consistently with previous work, we find that the particle size distribution is a determining factor in the charging. Specifically, our laboratory experiments demonstrate that the normalized span of the particle size distribution plays an important role in the magnitude of charging generated. The influence of the normalized span on plume charging suggests that all ash plumes are likely to be charged, with implications for remote sensing and plume lifetime through scavenging effects.

  8. Specific-age group sex estimation of infants through geometric morphometrics analysis of pubis and ischium.

    PubMed

    Estévez Campo, Enrique José; López-Lázaro, Sandra; López-Morago Rodríguez, Claudia; Alemán Aguilera, Inmaculada; Botella López, Miguel Cecilio

    2018-05-01

    Sex determination of unknown individuals is one of the primary goals of Physical and Forensic Anthropology. The adult skeleton can be sexed using both morphological and metric traits on a large number of bones, and the human pelvis is often used as an important element of adult sex determination. However, studies of the pelvic bone in subadult individuals face several limitations due to the absence of sexually dimorphic characteristics. In this study, we analyse the sexual dimorphism of the immature pubis and ischium, attending to their shape (Procrustes residuals) and size (centroid size), using an identified sample of subadult individuals, composed of 58 individuals for the pubis and 83 for the ischium, aged between birth and 1 year of life, from the Granada osteological collection of identified infants (Granada, Spain). Geometric morphometric methods and discriminant analysis were applied. The results of intra- and inter-observer error showed good and excellent agreement in the location of the coordinates of landmarks and semilandmarks, respectively. Principal component analysis performed on shape and size variables showed superposition of the two sexes, suggesting a low degree of sexual dimorphism. Canonical variate analysis did not show significant differences between male and female shapes. As a consequence, discriminant analysis with leave-one-out cross-validation provided low classification accuracy. The results thus point to a low degree of sexual dimorphism, reflected in the absence of significant shape differences in the subadult sample and the poor cross-validated classification accuracy. The inclusion of centroid size as a discriminant variable did not significantly improve the results of the analysis. The similarities found between the sexes prevent consideration of pubic and ischial morphology as a sex estimator in early stages of development. The authors suggest extending this study by analysing the different trajectories of shape and size in later ontogeny between males and females. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 quadrats of 10 m². Analysis of simulated quadrats ranging in size from 10 to 50 m² indicated that the most precise sample unit was the 10 m² quadrat. Samples taken when abundance was < 0.04 ticks per 10 m² were more likely not to depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into 10 abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m², while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fitted and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate them. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
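
    The fixed-precision logic can be made concrete: for negative binomial counts with mean m and common k, a standard formula for the number of quadrats achieving precision D (standard error divided by the mean) is n = (1/D²)(1/m + 1/k). A sketch using the paper's common k (the precision target is an assumed value):

        # Quadrats required for fixed precision D under a negative binomial.
        from math import ceil

        def quadrats_needed(mean_per_quadrat, k=0.3742, precision=0.25):
            return ceil((1 / precision**2) * (1 / mean_per_quadrat + 1 / k))

        for m in (0.02, 0.05, 0.10):        # ticks per 10-m^2 quadrat
            print(m, quadrats_needed(m))    # 843, 363, 203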

  10. Designing to Save Energy

    ERIC Educational Resources Information Center

    Santamaria, Joseph W.

    1977-01-01

    While tripling the campus size of Alvin Community College in Texas, architects and engineers cut back on nonessential lighting, recaptured waste heat, insulated everything possible, and let energy considerations dictate the size and shape of the building. (Author/MLF)

  11. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary: The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
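
    The second rule has a useful closed form: if total cost is F + c·n (fixed plus per-subject), minimizing (F + c·n)/√n gives n* = F/c, i.e., enrol until variable spending matches the fixed cost. A quick numeric check (the costs are invented):

        # Minimize (F + c*n)/sqrt(n) over n: optimum at n* = F/c.
        import numpy as np

        F, c = 200_000.0, 400.0              # fixed cost and cost per subject
        n = np.arange(1, 2001)
        objective = (F + c * n) / np.sqrt(n)

        print(int(n[objective.argmin()]))    # 500
        print(int(F / c))                    # matches the closed form: 500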

  12. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments, so additional issues must be carefully addressed, including the false discovery rate for multiple statistical tests and the widely varying read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference: read counts and their dispersions are estimated from the reference's distribution, and from that information the power and sample size are estimated and summarized. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.

  13. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions and estimate sample size based on GEE. We solved for the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields larger sample size estimates than all the other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
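
    For reference, the asymptotic unconditional McNemar sample size depends only on the discordant-cell probabilities; a hedged sketch of the standard formula with illustrative proportions:

        # Asymptotic unconditional McNemar sample size for paired binary data.
        from math import ceil, sqrt
        from scipy.stats import norm

        def mcnemar_n(p10, p01, alpha=0.05, power=0.80):
            psi, delta = p10 + p01, p10 - p01      # discordant sum and difference
            za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
            return ceil((za * sqrt(psi) + zb * sqrt(psi - delta**2))**2 / delta**2)

        print(mcnemar_n(p10=0.25, p01=0.10))   # pairs required (~120)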

  14. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.

  15. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
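
    The recommended approach (fitting the cumulative distribution with maximum likelihood) amounts to an interval-censored likelihood in which each sampling interval contributes F(upper) − F(lower) for the propagules recovered in it; a hedged sketch with a lognormal model and invented interval data:

        # ML fit of a lognormal retention-time distribution to interval-censored counts.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import lognorm

        # (lower, upper, count) per sampling interval -- illustrative data only
        intervals = [(0, 2, 35), (2, 4, 120), (4, 8, 180), (8, 16, 110), (16, 48, 55)]

        def neg_log_lik(params):
            mu, sigma = params[0], abs(params[1])   # keep sigma positive in the search
            ll = 0.0
            for lo, hi, count in intervals:
                prob = (lognorm.cdf(hi, sigma, scale=np.exp(mu))
                        - lognorm.cdf(lo, sigma, scale=np.exp(mu)))
                ll += count * np.log(max(prob, 1e-300))
            return -ll

        fit = minimize(neg_log_lik, x0=[np.log(6.0), 1.0], method="Nelder-Mead")
        print(fit.x)   # estimated (mu, sigma) of log retention time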

  16. The Comparability of the Standardized Mean Difference Effect Size across Different Measures of the Same Construct: Measurement Considerations

    ERIC Educational Resources Information Center

    Nugent, William R.

    2006-01-01

    One of the most important effect sizes used in meta-analysis is the standardized mean difference (SMD). In this article, the conditions under which SMD effect sizes based on different measures of the same construct are directly comparable are investigated. The results show that SMD effect sizes from different measures of the same construct are…

  17. Adjustable Nyquist-rate System for Single-Bit Sigma-Delta ADC with Alternative FIR Architecture

    NASA Astrophysics Data System (ADS)

    Frick, Vincent; Dadouche, Foudil; Berviller, Hervé

    2016-09-01

    This paper presents a new smart and compact system dedicated to controlling the output sampling frequency of an analogue-to-digital converter (ADC) based on a single-bit sigma-delta (ΣΔ) modulator. This system dramatically improves the spectral analysis capabilities of power network analysers (power meters) by adjusting the ADC's sampling frequency to the input signal's fundamental frequency with an accuracy of a few parts per million. The trade-off between simplicity and performance that motivated the choice of the ADC architecture is discussed first, along with design considerations for an ultra-steep direct-form FIR filter optimised in terms of size and operating speed. Thanks to its compact description in standard VHDL, the architecture of the proposed system is particularly suitable for application-specific integrated circuit (ASIC) implementation in low-power, low-cost power meter applications. Field programmable gate array (FPGA) prototyping and experimental results validate the adjustable sampling frequency concept. They also show that the system outperforms dedicated IP resources in terms of implementation and power capabilities.

  18. Postmortem Cholesterol Levels in Peripheral Nerve Tissue: Preliminary Considerations on Interindividual and Intraindividual Variation.

    PubMed

    Vacchiano, Giuseppe; Luna Maldonado, Aurelio; Matas Ros, Maria; Fiorenza, Elisa; Silvestre, Angela; Simonetti, Biagio; Pieri, Maria

    2018-06-01

    The study reports the evolution of the demyelinization process based on cholesterol ([CHOL]) levels quantified in median nerve samples collected from both the right and left wrists at different times from death. The statistical data show that the phenomenon evolves differently in the right and left nerves. Such a difference can reasonably be attributed to a different multicenter evolution of the demyelinization. For data analysis, the enrolled subjects were grouped by similar postmortem intervals (PMIs), considering 3 intervals: PMI < 48 hours, 48 hours < PMI < 78 hours, and PMI > 78 hours. Data obtained from tissue dissected within 48 hours of death allowed for PMI estimation according to the following equations: PMI = 0.000 + 0.7623 [CHOL]right (R = 0.581) for the right wrist and PMI = 0.000 + 0.8911 [CHOL]left (R = 0.794) for the left wrist. At present, this correlation cannot be considered definitive because of the small size of the sample analyzed, and because differences in sampling time and interindividual and intraindividual variation may influence the demyelinization process.

  19. A methodology for the semi-automatic digital image analysis of fragmental impactites

    NASA Astrophysics Data System (ADS)

    Chanou, A.; Osinski, G. R.; Grieve, R. A. F.

    2014-04-01

    A semi-automated digital image analysis method is developed for the comparative textural study of impact melt-bearing breccias. This method uses the free ImageJ software developed by the National Institutes of Health (NIH). Digital image analysis is performed on scans of hand samples (10-15 cm across), based on macroscopic interpretations of the rock components. All image processing and segmentation are done semi-automatically, with the least possible manual intervention. The areal fraction of components is estimated, and modal abundances can be deduced where the physical optical properties (e.g., contrast, color) of the samples allow it. Other parameters that can be measured include, for example, clast size, clast-preferred orientations, average box-counting dimension or fragment-shape complexity, and nearest neighbor distances (NnD). This semi-automated method allows the analysis of a larger number of samples in a relatively short time. Textures, granulometry, and shape descriptors are of considerable importance in rock characterization. The methodology is used to determine variations in the physical characteristics of some examples of fragmental impactites.
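
    The ImageJ workflow the authors describe can be approximated in Python with scikit-image, as a hedged stand-in: automatic thresholding for segmentation, light cleanup in place of manual intervention, then areal fractions and per-clast descriptors (the input image below is synthetic):

        # Sketch of semi-automatic clast analysis with scikit-image.
        import numpy as np
        from skimage import filters, measure, morphology

        rng = np.random.default_rng(3)
        image = rng.normal(0.2, 0.05, (400, 400))      # synthetic matrix background
        for _ in range(40):                            # paint bright "clasts"
            r, c = rng.integers(20, 380, 2)
            s = rng.integers(4, 15)
            rr, cc = np.ogrid[:400, :400]
            image[(rr - r)**2 + (cc - c)**2 < s**2] = 0.8

        mask = image > filters.threshold_otsu(image)   # automatic segmentation
        mask = morphology.remove_small_objects(mask, 16)
        labels = measure.label(mask)
        areas = [p.area for p in measure.regionprops(labels)]

        print(f"areal fraction: {mask.mean():.3f}")
        print(f"clasts: {labels.max()}, mean area: {np.mean(areas):.1f} px")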

  20. Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod

    USGS Publications Warehouse

    Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.

    2008-01-01

    To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate that units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and practically implemented than the others. © 2008 The Society of Population Ecology and Springer.
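
    A hedged miniature of this simulation approach (not the SAMPLE program itself): generate an aggregated population of quadrat counts, then track how the precision of the density estimate improves with the sampling fraction under simple random sampling:

        # Miniature design evaluation: precision of density estimates vs. fraction.
        import numpy as np

        rng = np.random.default_rng(11)
        counts = rng.negative_binomial(n=0.3, p=0.05, size=2_000)  # aggregated plants

        for fraction in (0.05, 0.10, 0.25):
            k = int(fraction * counts.size)
            estimates = [rng.choice(counts, k, replace=False).mean()
                         for _ in range(2_000)]
            cv = np.std(estimates) / np.mean(estimates)
            print(f"fraction {fraction:.2f}: CV of density estimate = {cv:.3f}")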

  1. A Review of the Hypoglycemic Effects of Five Commonly Used Herbal Food Supplements

    PubMed Central

    Deng, Ruitang

    2013-01-01

    Hyperglycemia is a pathological condition associated with prediabetes and diabetes. The incidence of prediabetes and diabetes is increasing and imposes great burden on healthcare worldwide. Patients with prediabetes and diabetes have significantly increased risk for cardiovascular diseases and other complications. Currently, management of hyperglycemia includes pharmacological interventions, physical exercise, and change of life style and diet. Food supplements have increasingly become attractive alternatives to prevent or treat hyperglycemia, especially for subjects with mild hyperglycemia. This review summarized current patents and patent applications with relevant literature on five commonly used food supplements with claims of hypoglycemic effects, including emblica officinalis (gooseberry), fenugreek, green tea, momordica charantia (bitter melon) and cinnamon. The data from human clinical studies did not support a recommendation for all five supplements to manage hyperglycemia. Fenugreek and composite supplements containing emblica officinalis showed the most consistency in lowering fasting blood sugar (FBS) or glycated hemoglobin (HbA1c) levels in diabetic patients. The hypoglycemic effects of cinnamon and momordica charantia were demonstrated in most of the trials with some exceptions. However, green tea exhibited limited benefits in reducing FBS or HbA1c levels and should not be recommended for managing hyperglycemia. Certain limitations are noticed in a considerable number of clinical studies including small sample size, poor experimental design and considerable variations in participant population, preparation format, daily dose, and treatment duration. Future studies with more defined participants, standardized preparation and dose, and improved trial design and size are warranted. PMID:22329631

  2. Systematic evaluation of deep learning based detection frameworks for aerial imagery

    NASA Astrophysics Data System (ADS)

    Sommer, Lars; Steinmann, Lucas; Schumann, Arne; Beyerer, Jürgen

    2018-04-01

    Object detection in aerial imagery is crucial for many applications in the civil and military domain. In recent years, deep learning based object detection frameworks significantly outperformed conventional approaches based on hand-crafted features on several datasets. However, these detection frameworks are generally designed and optimized for common benchmark datasets, which considerably differ from aerial imagery especially in object sizes. As already demonstrated for Faster R-CNN, several adaptations are necessary to account for these differences. In this work, we adapt several state-of-the-art detection frameworks including Faster R-CNN, R-FCN, and Single Shot MultiBox Detector (SSD) to aerial imagery. We discuss adaptations that mainly improve the detection accuracy of all frameworks in detail. As the output of deeper convolutional layers comprise more semantic information, these layers are generally used in detection frameworks as feature map to locate and classify objects. However, the resolution of these feature maps is insufficient for handling small object instances, which results in an inaccurate localization or incorrect classification of small objects. Furthermore, state-of-the-art detection frameworks perform bounding box regression to predict the exact object location. Therefore, so called anchor or default boxes are used as reference. We demonstrate how an appropriate choice of anchor box sizes can considerably improve detection performance. Furthermore, we evaluate the impact of the performed adaptations on two publicly available datasets to account for various ground sampling distances or differing backgrounds. The presented adaptations can be used as guideline for further datasets or detection frameworks.
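
    The adaptation of anchor box sizes can be made concrete with a small sketch. The generation scheme below (per-feature-map scales and aspect ratios) is the common pattern used by Faster R-CNN and SSD; the specific stride, scales, and ratios are illustrative assumptions, not the values tuned in the paper.

        import numpy as np

        def anchor_sizes(feature_map_stride, scales, aspect_ratios):
            """Generate (width, height) anchor boxes for one feature map.
            Choosing small scales relative to the stride reflects the adaptation
            for aerial imagery, where objects are much smaller than in common
            benchmarks; the values below are illustrative, not the paper's."""
            anchors = []
            for s in scales:
                for ar in aspect_ratios:
                    w = feature_map_stride * s * np.sqrt(ar)
                    h = feature_map_stride * s / np.sqrt(ar)
                    anchors.append((w, h))
            return anchors

        # e.g., a stride-8 feature map with small scales suited to vehicles
        print(anchor_sizes(8, scales=[0.5, 1.0, 2.0], aspect_ratios=[0.5, 1.0, 2.0]))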

  3. On the effect of incremental forming on alpha phase precipitation and mechanical behavior of beta-Ti-10V-2Fe-3Al

    NASA Astrophysics Data System (ADS)

    Winter, S.; F-X Wagner, M.

    2016-03-01

    A combination of good ductility and fatigue resistance makes β-titanium alloys interesting for many current and potential future applications. The mechanical behavior is primarily determined by microstructural parameters like (beta phase) grain size, morphology and volume fraction of primary / secondary α-phase precipitates, and this allows changing and optimizing their mechanical properties across a wide range. In this study, we investigate the possibility to modify the microstructure of the high-strength beta titanium alloy Ti-10V-2Fe-3Al, with a special focus on shape and volume fraction of primary α-phase. In addition to the conventional strategy for precipitation of primary α, a special thermo-mechanical processing is performed; this processing route combines the conventional heat treatment with incremental forming during the primary α-phase annealing. After incremental forming, considerable variations in terms of microstructure and mechanical properties can be obtained for different thermo-mechanical processing routes. The microstructures of the deformed samples are characterized by globular as well as lamellar (bimodal) α precipitates, whereas conventional annealing only results in the formation of lamellar precipitates. Because of the smaller size, and the lower amount, of α-phase after incremental forming, tensile strength is not as high as after the conventional strategy. However, high amounts of grain boundary α and lamellar αp-phase in the undeformed samples lead to a significantly lower ductility in comparison to the matrix with bimodal structures obtained by thermo-mechanical processing. These results illustrate the potential of incremental forming during annealing to modify the microstructure of the beta titanium alloy Ti-10V-2Fe-3Al across a wide range of volume fractions and morphologies of the primary α phase, which in turn leads to considerable changes in, and improvement of, the mechanical properties.

  4. Variation Across U.S. Assisted Living Facilities: Admissions, Resident Care Needs, and Staffing.

    PubMed

    Han, Kihye; Trinkoff, Alison M; Storr, Carla L; Lerner, Nancy; Yang, Bo Kyum

    2017-01-01

    Though more people in the United States currently reside in assisted living facilities (ALFs) than nursing homes, little is known about ALF admission policies, resident care needs, and staffing characteristics. We therefore conducted this study using a nationwide sample of ALFs to examine these factors, along with comparison of ALFs by size. Cross-sectional secondary data analysis using data from the 2010 National Survey of Residential Care Facilities. Measures included nine admission policy items, seven items on the proportion of residents with selected conditions or care needs, and six items on staffing characteristics (e.g., access to licensed nurse, aide training). Facilities (n = 2,301) were divided into three categories by size: small, 4 to 10 beds; medium, 11 to 25 beds; and large, 26 or more beds. Analyses took complex sampling design effects into account to project national U.S. estimates. More than half of ALFs admitted residents with considerable healthcare needs and served populations that required nursing care, such as for transfers, medications, and eating or dressing. Staffing was largely composed of patient care aides, and fewer than half of ALFs had licensed care provider (registered nurse, licensed practical nurse) hours. Smaller facilities tended to have more inclusive admission policies and residents with more complex care needs (more mobility, eating and medication assistance required, short-term memory issues, p < .01) and less access to licensed nurses than larger ALFs (p < .01). This study suggests ALFs are caring for and admitting residents with considerable care needs, indicating potential overlap with nursing home populations. Despite this finding, ALF regulations lag far behind those in effect for nursing homes. In addition, measurement of care outcomes is critically needed to ensure appropriate ALF care quality. As more people choose ALFs, outcome measures for ALFs, which are now unavailable, should be developed to allow for oversight and monitoring of care quality. © 2016 Sigma Theta Tau International.

  5. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. Approaches to using the tables are also discussed. PMID:27891446
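
    The paper's tables were generated with PASS; a commonly used closed-form alternative (a Buderer-style normal approximation) can be sketched as follows. This is a generic approximation presented for illustration, not a reproduction of the paper's tables.

        from math import ceil
        from scipy.stats import norm

        def n_for_sensitivity(se, prevalence, precision, alpha=0.05):
            """Approximate total sample size so that estimated sensitivity has
            the desired CI half-width (precision); normal approximation."""
            z = norm.ppf(1 - alpha / 2)
            n_cases = (z**2 * se * (1 - se)) / precision**2
            return ceil(n_cases / prevalence)

        def n_for_specificity(sp, prevalence, precision, alpha=0.05):
            """Same idea for specificity, scaled by the non-diseased fraction."""
            z = norm.ppf(1 - alpha / 2)
            n_controls = (z**2 * sp * (1 - sp)) / precision**2
            return ceil(n_controls / (1 - prevalence))

        # e.g., expected Se = 0.90, prevalence 20%, +/- 5% precision
        print(n_for_sensitivity(0.90, 0.20, 0.05))   # about 692 subjects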

  6. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, this study compared the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how sample sizes are calculated and reported in anesthesiology research are needed.
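
    The kind of standard calculation the audited RCTs report can be sketched with the usual normal-approximation formula for a two-arm trial with a continuous outcome; the numbers below are hypothetical and chosen to show how an optimistic assumed effect leads to underpowering.

        from math import ceil
        from scipy.stats import norm

        def n_per_group(delta, sd, alpha=0.05, power=0.80):
            """Normal-approximation sample size per arm for a two-arm
            superiority trial; delta is the assumed between-group difference."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return ceil(2 * (z_a + z_b)**2 * sd**2 / delta**2)

        # An optimistic assumed effect (delta = 10) vs. the effect actually
        # observed (delta = 6) illustrates how underpowering arises:
        print(n_per_group(10, 20))   # 63 per group planned
        print(n_per_group(6, 20))    # 175 per group actually needed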

  7. Sizing ocean giants: patterns of intraspecific size variation in marine megafauna

    PubMed Central

    Balk, Meghan A.; Benfield, Mark C.; Branch, Trevor A.; Chen, Catherine; Cosgrove, James; Dove, Alistair D.M.; Gaskins, Lindsay C.; Helm, Rebecca R.; Hochberg, Frederick G.; Lee, Frank B.; Marshall, Andrea; McMurray, Steven E.; Schanche, Caroline; Stone, Shane N.; Thaler, Andrew D.

    2015-01-01

    What are the greatest sizes that the largest marine megafauna obtain? This is a simple question with a difficult and complex answer. Many of the largest-sized species occur in the world’s oceans. For many of these, rarity, remoteness, and quite simply the logistics of measuring these giants has made obtaining accurate size measurements difficult. Inaccurate reports of maximum sizes run rampant through the scientific literature and popular media. Moreover, how intraspecific variation in the body sizes of these animals relates to sex, population structure, the environment, and interactions with humans remains underappreciated. Here, we review and analyze body size for 25 ocean giants ranging across the animal kingdom. For each taxon we document body size for the largest known marine species of several clades. We also analyze intraspecific variation and identify the largest known individuals for each species. Where data allows, we analyze spatial and temporal intraspecific size variation. We also provide allometric scaling equations between different size measurements as resources to other researchers. In some cases, the lack of data prevents us from fully examining these topics and instead we specifically highlight these deficiencies and the barriers that exist for data collection. Overall, we found considerable variability in intraspecific size distributions from strongly left- to strongly right-skewed. We provide several allometric equations that allow for estimation of total lengths and weights from more easily obtained measurements. In several cases, we also quantify considerable geographic variation and decreases in size likely attributed to humans. PMID:25649000
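
    The allometric scaling equations mentioned are typically fit on log-log axes, since W = a * L**b is linear after taking logarithms. The sketch below shows the fit; the length and weight values are invented for illustration and are not from the paper's dataset.

        import numpy as np

        # Hypothetical length (m) and weight (kg) measurements for one species
        length = np.array([3.1, 4.2, 5.0, 6.3, 7.8])
        weight = np.array([210, 540, 900, 1900, 3700])

        # Fit log(W) = log(a) + b * log(L) by least squares
        b, log_a = np.polyfit(np.log(length), np.log(weight), 1)
        a = np.exp(log_a)
        print(f"W ~ {a:.1f} * L^{b:.2f}")  # an exponent b near 3 suggests isometry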

  8. Some Considerations on the Dynamics of Nanometric Suspensions in Fluid Media

    NASA Astrophysics Data System (ADS)

    Lungu, Mihai; Neculae, Adrian; Bunoiu, Madalin

    2009-05-01

    Nano-sized particles have received considerable interest in the last decade. The manipulation of nanoparticles is becoming an important issue as they are increasingly produced as a result of material synthesis and combustion emission. Nanometric particles represent an important threat to human health because they can readily enter the human body through inhalation and their toxicity is relatively high due to their large specific surface area. The separation of nano-sized particles into distinct, spatially separated bands has also attracted considerable attention recently in many scientific areas; the uses of nanoparticles are very promising for new technologies. The behavior of a suspension of sub-micronic particles under the action of dielectrophoretic force is numerically investigated and a theoretical model is proposed.

  9. Study of magnetic and electrical properties of nanocrystalline Mn doped NiO.

    PubMed

    Raja, S Philip; Venkateswaran, C

    2011-03-01

    Diluted magnetic semiconductors (DMS) have been intensively explored in recent years for their applications in spintronics, which is expected to revolutionize present-day information technology. Nanocrystalline Mn doped NiO samples were prepared using the chemical co-precipitation method with an aim to realize room temperature ferromagnetism. Phase formation of the samples was studied using X-ray diffraction-Rietveld analysis. Scanning electron microscopy and energy dispersive X-ray analysis results reveal the nanocrystalline nature of the samples, agglomeration of the particles, a considerable particle size distribution, and near stoichiometry. Thermomagnetic curves confirm the single-phase formation of the samples up to 1% doping of Mn. Vibrating sample magnetometer measurements indicate the absence of ferromagnetism at room temperature. This may be due to the low concentration of Mn2+ ions having weak indirect coupling with Ni2+ ions. The lack of free carriers is also expected to be a reason for the absence of ferromagnetism, which is in agreement with the results of resistivity measurements using impedance spectroscopy. The Arrhenius plot shows the presence of two thermally activated regions, and the activation energy for the nanocrystalline Mn doped sample was found to be greater than that of undoped NiO. This is attributed to the doping effect of Mn. However, the dielectric constant of the samples was found to be of the same order of magnitude as, and very much comparable with, that of undoped NiO.

  10. Application of cluster and discriminant analyses to diagnose lithological heterogeneity of the parent material according to its particle-size distribution

    NASA Astrophysics Data System (ADS)

    Giniyatullin, K. G.; Valeeva, A. A.; Smirnova, E. V.

    2017-08-01

    Particle-size distribution in soddy-podzolic and light gray forest soils of the Botanical Garden of Kazan Federal University has been studied. The cluster analysis of data on the samples from genetic soil horizons attests to the lithological heterogeneity of the profiles of all the studied soils. It is probable that they developed from two-layered sediments with the upper colluvial layer underlain by the alluvial layer. According to the discriminant analysis, the major contribution to the discrimination of colluvial and alluvial layers is that of the fraction >0.25 mm. The results of canonical analysis show that there is only one significant discriminant function that separates alluvial and colluvial sediments on the investigated territory. The discriminant function correlates with the contents of the fractions 0.05-0.01, 0.25-0.05, and >0.25 mm. Classification functions making it possible to distinguish between alluvial and colluvial sediments have been calculated. Statistical assessment of particle-size distribution data obtained for the plow horizons on ten plowed fields within the garden indicates that this horizon is formed from colluvial sediments. We conclude that the contents of separate fractions and their ratios cannot be used as a universal criterion of lithological heterogeneity. However, an adequate combination of cluster and discriminant analyses makes it possible to give a comprehensive assessment of the lithology of soil samples from data on the contents of sand and silt fractions, which considerably increases the information value and reliability of the results.
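
    As an illustration of the discriminant step, the sketch below fits a linear discriminant to hypothetical sand-fraction data and classifies a new sample. The numbers and class labels are invented, not the study's measurements.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # Hypothetical particle-size fractions (%, columns: >0.25, 0.25-0.05,
        # 0.05-0.01 mm) for samples labeled colluvial (0) or alluvial (1)
        X = np.array([[ 2, 35, 40],
                      [ 3, 33, 42],
                      [ 1, 37, 38],
                      [12, 48, 22],
                      [15, 45, 20],
                      [11, 50, 24]], dtype=float)
        y = np.array([0, 0, 0, 1, 1, 1])

        lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
        print(lda.scalings_.ravel())             # which fractions discriminate most
        print(lda.predict([[4.0, 36.0, 39.0]]))  # classify a new plow-horizon sample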

  11. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  12. Measuring Spray Droplet Size from Agricultural Nozzles Using Laser Diffraction

    PubMed Central

    Fritz, Bradley K.; Hoffmann, W. Clint

    2016-01-01

    When making an application of any crop protection material such as an herbicide or pesticide, the applicator uses a variety of skills and information to make an application so that the material reaches the target site (i.e., plant). Information critical in this process is the droplet size that a particular spray nozzle, spray pressure, and spray solution combination generates, as droplet size greatly influences product efficacy and how the spray moves through the environment. Researchers and product manufacturers commonly use laser diffraction equipment to measure the spray droplet size in laboratory wind tunnels. The work presented here describes methods used in making spray droplet size measurements with laser diffraction equipment for both ground and aerial application scenarios that can be used to ensure inter- and intra-laboratory precision while minimizing sampling bias associated with laser diffraction systems. Maintaining critical measurement distances and concurrent airflow throughout the testing process is key to this precision. Real time data quality analysis is also critical to preventing excess variation in the data or extraneous inclusion of erroneous data. Some limitations of this method include atypical spray nozzles, spray solutions or application conditions that result in spray streams that do not fully atomize within the measurement distances discussed. Successful adaption of this method can provide a highly efficient method for evaluation of the performance of agrochemical spray application nozzles under a variety of operational settings. Also discussed are potential experimental design considerations that can be included to enhance functionality of the data collected. PMID:27684589

  13. Basic numerical competences in large-scale assessment data: Structure and long-term relevance.

    PubMed

    Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian

    2018-03-01

    Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy reflected by different basic numerical competences in kindergarten and its predictive value for mathematical achievement 6 years later using data from large-scale assessment. This allowed analyses based on considerably large sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6 even after controlling for influences of general cognitive ability. Thus, our results indicated a differentiated view on early numeracy considering basic numerical competences in kindergarten reflected in large-scale assessment data. Consideration of different basic numerical competences allows for evaluating their specific predictive value for later mathematical achievement but also mathematical learning difficulties. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Effect of Cu2+ substitution on the magnetic properties of co-precipitated Ni-Cu-Zn ferrite nanoparticles

    NASA Astrophysics Data System (ADS)

    Ramakrishna, K. S.; Srinivas, Ch.; Tirupanyam, B. V.; Ramesh, P. N.; Meena, S. S.; Potukuchi, D. M.; Sastry, D. L.

    2017-05-01

    Spinel ferrite nanoparticles with the chemical formula NixCu0.1Zn0.9-xFe2O4 (x = 0.5, 0.6, 0.7) have been synthesized using the co-precipitation method followed by heat treatment at a temperature of 200 °C for 2 h. The results of XRD, FE-SEM and VSM studies are reported. XRD patterns confirm the formation of the cubic spinel phase of the ferrite samples along with a small amount of a secondary phase of α-Fe2O3, whose concentration decreases as Ni2+ concentration increases. The crystallite sizes (in the range of 7.5-13.9 nm) increase and the lattice parameter decreases with increase in Ni2+ ion concentration. These values are comparable to those of NiZn ferrite without Cu substitution. It has been observed that there is a considerable reduction in saturation magnetisation (Ms). This and differences in other magnetic parameters are attributed to considerable changes in cation distribution or core-shell interactions of NiZn ferrite with 10 mole% Cu substitution in place of Zn.

  15. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
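
    As a hedged illustration of the kind of computer-simulated experiment described, the sketch below estimates Type I and Type II error rates of the two-sample t-test at small sample sizes. The normal distributions, effect size, and replication count are assumptions for the example, not the authors' exact protocol.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)

        def error_rates(n, effect, reps=5000, alpha=0.05):
            """Monte Carlo Type I and Type II error rates of the two-sample
            t-test for n per group; 'effect' is the true mean shift in SD units."""
            type1 = type2 = 0
            for _ in range(reps):
                a = rng.normal(0, 1, n)
                if ttest_ind(a, rng.normal(0, 1, n)).pvalue < alpha:
                    type1 += 1                    # false positive, no real effect
                if ttest_ind(a, rng.normal(effect, 1, n)).pvalue >= alpha:
                    type2 += 1                    # missed a real effect
            return type1 / reps, type2 / reps

        for n in (3, 6, 9):
            print(n, error_rates(n, effect=2.0))  # strong effect, tiny samples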

  16. Synchrotron-based XRD from rat bone of different age groups.

    PubMed

    Rao, D V; Gigante, G E; Cesareo, R; Brunetti, A; Schiavon, N; Akatsuka, T; Yuasa, T; Takeda, T

    2017-05-01

    Synchrotron-based XRD spectra from rat bone of different age groups (8 w, 56 w, and 78 w), lumbar vertebra at early stages of bone formation, calcium hydroxyapatite (HAp) [Ca10(PO4)6(OH)2] bone fill with varying composition (60% and 70%), and bone cream (35-48%) have been acquired with 15 keV synchrotron X-rays. Experiments were performed at DESY, Hamburg, Germany, utilizing the Resonant and Diffraction beamline (P9) with 15 keV X-rays (λ = 0.82666 Å). Diffraction data were quantitatively analyzed using the Rietveld refinement approach, which allowed us to characterize the structure of these samples in their early stages. Hydroxyapatite has received considerable attention in medical and materials sciences, since such materials constitute hard tissues such as bone and teeth. The higher bioactivity of these samples has gained reasonable interest for biological applications and for bone tissue repair in oral surgery and orthopedics. The results obtained from these samples, such as phase data, crystallite size of the phases, and the degree of crystallinity, confirm the apatite family crystallizing in a hexagonal system, space group P63/m, with lattice parameters a = 9.4328 Å and c = 6.8842 Å (JCPDS card #09-0432). Synchrotron-based XRD patterns are relatively sharp and well resolved and can be attributed to the hexagonal crystal form of hydroxyapatite. All the samples were examined with a scanning electron microscope at an accelerating voltage of 15 kV. The presence of large globules of different sizes is observed in the small age groups of the rat bone (8 w) and lumbar vertebra (LV), as distinguished from the large age groups (56 and 78 w), in all samples at different magnifications, reflecting an amorphous phase without significant traces of crystalline phases. Scanning electron microscopy (SEM) was used to characterize the morphology and crystalline properties of HAp for all the samples, at resolutions from 2 to 100 μm. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Electromagnetic navigation-guided TBNA vs conventional TBNA in the diagnosis of mediastinal lymphadenopathy.

    PubMed

    Diken, Özlem E; Karnak, Demet; Çiledağ, Aydın; Ceyhan, Koray; Atasoy, Çetin; Akyar, Serdar; Kayacan, Oya

    2015-04-01

    Conventional transbronchial needle aspiration (C-TBNA) is a safe method for the diagnosis of hilar and mediastinal lymphadenopathy (MLN). However, diagnostic yield of this technique varies considerably. Electromagnetic navigation bronchoscopy (ENB) is a new technology to increase the diagnostic yield of flexible bronchoscopy for the peripheral lung lesions and MLN. The aim of this prospective study was to compare the diagnostic and sampling success of ENB-guided TBNA (ENB-TBNA) in comparison with C-TBNA while dealing with MLN. Consecutive patients with MLN were randomized into two groups - C-TBNA and ENB-TBNA - using a computer-based number shuffling system to avoid recruitment bias. Procedures were performed in usual fashion, published previously. Ninety-four cases (M/F: 45/49) with a total of 145 stations of MLN were enrolled in the study. In 44 patients, 81 stations were sampled by ENB-TBNA, and in 50 patients 64 stations by C-TBNA. The mean size of MLN in study subjects was 17.56 ± 6.25 mm. The sampling success was significantly higher in ENB-TBNA group (82.7%) compared with C-TBNA group (51.6%) (P < 0.005). Defined by histopathological result, the diagnostic yield in ENB-TBNA was 72.8%, and 42.2% with C-TBNA (P < 0.005). For subcarinal localization, sampling or diagnostic success was higher in ENB-TBNA than that of C-TBNA (P < 0.05). Based on the size of the MLN ≤15 mm or >15 mm, the sampling success of ENB-TBNA was also significantly higher than C-TBNA in both subgroups (P < 0.005 and P < 0.005, respectively). No serious complication was observed. In this study comparing ENB-TBNA and C-TBNA, the sampling and diagnostic success of ENB-TBNA was found to be superior while dealing with MLN, in all categories studied. © 2014 John Wiley & Sons Ltd.

  18. How to Buy School Seating.

    ERIC Educational Resources Information Center

    Summerville, D.G.

    1966-01-01

    An expert tells what kind of furniture you need for the different rooms in your schools. Suggestions are made separately for both elementary and secondary classrooms emphasizing consideration for the student. General considerations are listed regarding durability, floor protection, storage, chair leg finish, wooden vs. fiberglass, size, and…

  19. A hierarchical model for spatial capture-recapture data

    USGS Publications Warehouse

    Royle, J. Andrew; Young, K.V.

    2008-01-01

    Estimating density is a fundamental objective of many animal population studies. Application of methods for estimating population size from ostensibly closed populations is widespread, but ineffective for estimating absolute density because most populations are subject to short-term movements or so-called temporary emigration. This phenomenon invalidates the resulting estimates because the effective sample area is unknown. A number of methods involving the adjustment of estimates based on heuristic considerations are in widespread use. In this paper, a hierarchical model of spatially indexed capture recapture data is proposed for sampling based on area searches of spatial sample units subject to uniform sampling intensity. The hierarchical model contains explicit models for the distribution of individuals and their movements, in addition to an observation model that is conditional on the location of individuals during sampling. Bayesian analysis of the hierarchical model is achieved by the use of data augmentation, which allows for a straightforward implementation in the freely available software WinBUGS. We present results of a simulation study that was carried out to evaluate the operating characteristics of the Bayesian estimator under variable densities and movement patterns of individuals. An application of the model is presented for survey data on the flat-tailed horned lizard (Phrynosoma mcallii) in Arizona, USA.

  20. Preliminary design of a prototype particulate stack sampler. [For stack gas temperature under 300°C]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elder, J.C.; Littlefield, L.G.; Tillery, M.I.

    1978-06-01

    A preliminary design of a prototype particulate stack sampler (PPSS) has been prepared, and development of several components is under way. The objective of this Environmental Protection Agency (EPA)-sponsored program is to develop and demonstrate a prototype sampler with capabilities similar to EPA Method 5 apparatus but without some of the more troublesome aspects. Features of the new design include higher sampling flow; display (on demand) of all variables and periodic calculation of percent isokinetic, sample volume, and stack velocity; automatic control of probe and filter heaters; stainless steel surfaces in contact with the sample stream; single-point particle size separation in the probe nozzle; null-probe capability in the nozzle; and lower weight in the components of the sampling train. Design considerations will limit use of the PPSS to stack gas temperatures under approximately 300°C, which will exclude sampling some high-temperature stacks such as incinerators. Although need for filter weighing has not been eliminated in the new design, introduction of a variable-slit virtual impactor nozzle may eliminate the need for mass analysis of particles washed from the probe. Component development has shown some promise for continuous humidity measurement by an in-line wet-bulb, dry-bulb psychrometer.

  1. Salt in the Air during the Nitrogen, Aerosol Composition, and Halogens on a Tall Tower (NACHTT) Campaign

    NASA Astrophysics Data System (ADS)

    Pszenny, A.; Keene, W. C.; Sander, R.; Bearekman, R.; Deegan, B.; Maben, J. R.; Warrick-Wriston, C.; Young, A.

    2011-12-01

    Bulk and size-segregated aerosol samples were collected 22 m AGL at the Boulder Atmospheric Observatory (40°N, 105°W, 1563 m ASL) from 18 February to 13 March 2011. Total concentrations of Na, Mg, Al, Cl, V, Mn, Br and I in bulk samples were determined by neutron activation analysis. Ionic composition of all size-segregated and a subset of bulk samples was determined by ion chromatography of aqueous extracts. Mg, Al, V and Mn mass concentrations were highly correlated and present in ratios similar to those in Denver area surface soils. Na and Cl were less well correlated with these soil elements but, after correction for soil contributions, highly correlated with each other. Linear regression of non-soil Cl vs. non-soil Na yielded a slope of 1.69 ± 0.09 (95% C.I.; n = 173), a value between the mass ratios of sea salt (1.80) and halite (1.54). The median Na and Cl concentrations (6.8 and 6.6 nmol m⁻³ STP, respectively) were factors of 25 to 35 less than those typically measured in the marine boundary layer. Br and I were somewhat correlated and appeared to represent a third aerosol component. The average bulk Cl⁻:total Cl ratio was 0.99 ± 0.03 (n = 44), suggesting that essentially all aerosol chlorine was water-soluble. Na⁺ and Cl⁻ mass distributions were bimodal with most of the masses (medians 75% and 78%, respectively, n = 45) in supermicrometer particles. Possible origins of the "salt" component will be discussed based on consideration of 5-day HYSPLIT back trajectories and other information on sampled air mass characteristics.

  2. HIV prevention interventions to reduce sexual risk for African Americans: the influence of community-level stigma and psychological processes.

    PubMed

    Reid, Allecia E; Dovidio, John F; Ballester, Estrellita; Johnson, Blair T

    2014-02-01

    Interventions to improve public health may benefit from consideration of how environmental contexts can facilitate or hinder their success. We examined the extent to which efficacy of interventions to improve African Americans' condom use practices was moderated by two indicators of structural stigma-Whites' attitudes toward African Americans and residential segregation in the communities where interventions occurred. A previously published meta-analytic database was re-analyzed to examine the interplay of community-level stigma with the psychological processes implied by intervention content in influencing intervention efficacy. All studies were conducted in the United States and included samples that were at least 50% African American. Whites' attitudes were drawn from the American National Election Studies, which collects data from nationally representative samples. Residential segregation was drawn from published reports. Results showed independent effects of Whites' attitudes and residential segregation on condom use effect sizes. Interventions were most successful when Whites' attitudes were more positive or when residential segregation was low. These two structural factors interacted: Interventions improved condom use only when communities had both relatively positive attitudes toward African Americans and lower levels of segregation. The effect of Whites' attitudes was more pronounced at longer follow-up intervals and for younger samples and those samples with more African Americans. Tailoring content to participants' values and needs, which may reduce African Americans' mistrust of intervention providers, buffered against the negative influence of Whites' attitudes on condom use. The structural factors uniquely accounted for variance in condom use effect sizes over and above intervention-level features and community-level education and poverty. Results highlight the interplay of social identity and environment in perpetuating intergroup disparities. Potential mechanisms for these effects are discussed along with public health implications. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Forest Fuels Management in Europe

    Treesearch

    Gavriil Xanthopoulos; David Caballero; Miguel Galante; Daniel Alexandrian; Eric Rigolot; Raffaella Marzano

    2006-01-01

    Current fuel management practices vary considerably between European countries. Topography, forest and forest fuel characteristics, size and compartmentalization of forests, forest management practices, land uses, land ownership, size of properties, legislation, and, of course, tradition, are reasons for these differences. Firebreak construction, ...

  5. Effect Size in Efficacy Trials of Women With Decreased Sexual Desire.

    PubMed

    Pyke, Robert E; Clayton, Anita H

    2018-03-22

    Regarding hypoactive sexual desire disorder (HSDD) in women, some reviewers judge the effect size small for medications vs placebo, but substantial for cognitive behavior therapy (CBT) or mindfulness meditation training (MMT) vs wait list. However, we lack comparisons of the effect sizes for the active intervention itself, for the control treatment, and for the differential between the two. For efficacy trials of HSDD in women, we compared effect sizes for medications (testosterone/testosterone transdermal system, flibanserin, and bremelanotide) and placebo vs effect sizes for psychotherapy and wait-list control. We conducted a literature search for mean changes and SD on main measures of sexual desire and associated distress in trials of medications, CBT, or MMT. Effect size was used because it measures the magnitude of the intervention without confounding by sample size. Cohen d was used to determine effect sizes. For medications, mean (SD) effect size was 1.0 (0.34); for CBT and MMT, 1.0 (0.36); for placebo, 0.55 (0.16); and for wait list, 0.05 (0.26). Recommendations of psychotherapy over medication for treatment of HSDD are premature and not supported by data on effect sizes. Active participation in treatment conveys considerable non-specific benefits. Caregivers should attend to biological and psychosocial elements, and patient preference, to optimize response. Few clinical trials of psychotherapies were substantial in size or utilized adequate control paradigms. Medications and psychotherapies had similar, large effect sizes. Effect size of placebo was moderate. Effect size of wait-list control was very small, about one quarter that of placebo. Thus, a substantial non-specific therapeutic effect is associated with receiving placebo plus active care and evaluation. The difference in effect size between placebo and wait-list controls distorts the value of subtracting the effect of the control paradigm to estimate intervention effectiveness. Pyke RE, Clayton AH. Effect Size in Efficacy Trials of Women With Decreased Sexual Desire. Sex Med Rev 2018;XX:XXX-XXX. Copyright © 2018 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
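
    Cohen's d, the effect-size measure used throughout the review, is straightforward to compute; the sketch below uses a pooled standard deviation, with hypothetical score changes standing in for trial data.

        import numpy as np

        def cohens_d(x, y):
            """Cohen's d with a pooled standard deviation: the magnitude of a
            difference expressed in SD units, independent of sample size."""
            nx, ny = len(x), len(y)
            pooled_var = ((nx - 1) * np.var(x, ddof=1)
                          + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
            return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

        # Hypothetical pre-to-post desire-score changes, treatment vs. wait list
        treatment = np.array([8, 10, 12, 9, 11, 7])
        waitlist  = np.array([2, 3, 1, 4, 2, 3])
        print(round(cohens_d(treatment, waitlist), 2))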

  6. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates, not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible problems of under- or over-coverage for sample sizes ≤ 250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). Correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes are increased relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
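
    The link between SCR and SP underlying such calculators is the reversible catalytic model commonly used in malaria serology. The sketch below combines that model with a standard proportion sample size formula; it is a generic illustration under the usual model assumptions, not the authors' calculators, and the rates and age used are hypothetical.

        from math import exp, ceil
        from scipy.stats import norm

        def seroprevalence(age, scr, srr):
            """Expected seroprevalence at a given age under the reversible
            catalytic model: SP(a) = lam/(lam+rho) * (1 - exp(-(lam+rho)*a)),
            with lam = SCR and rho = SRR."""
            rate = scr + srr
            return (scr / rate) * (1.0 - exp(-rate * age))

        def n_for_proportion(p, half_width, alpha=0.05):
            """Sample size so a proportion p is estimated within +/- half_width."""
            z = norm.ppf(1 - alpha / 2)
            return ceil(z**2 * p * (1 - p) / half_width**2)

        # e.g., SCR = 0.05/yr, SRR = 0.01/yr, mean age 20 years
        sp = seroprevalence(20, 0.05, 0.01)
        print(round(sp, 3), n_for_proportion(sp, 0.05))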

  7. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.

  8. Cryo-comminution of plastic waste.

    PubMed

    Gente, Vincenzo; La Marca, Floriana; Lucci, Federica; Massacci, Paolo; Pani, Eleonora

    2004-01-01

    Recycling of plastics is a major issue in terms of environmental sustainability and waste management. The development of proper technologies for plastic recycling is recognised as a priority. To achieve this aim, the technologies applied in mineral processing can be adapted to recycling systems. In particular, the improvement of comminution technologies is one of the main actions to improve the quality of recycled plastics. The aim of this work is to point out suitable comminution processes for different types of plastic waste. Laboratory comminution tests have been carried out under different conditions of temperature and sample pre-conditioning, adopting CO2 and liquid nitrogen as refrigerant agents. The temperature has been monitored by thermocouples placed in the milling chamber. Different internal mill screens have also been adopted. A proper procedure has been set up in order to obtain a selective comminution and a size reduction suitable for further separation treatment. Tests have been performed on plastics coming from medical plastic waste and from a plant for spent lead battery recycling. Results coming from different mill devices have been compared, taking into consideration different indexes for representative size distributions. The results of the tests show that cryo-comminution improves the effectiveness of size reduction of plastics, promotes liberation of constituents, and increases the specific surface of comminuted particles in comparison with comminution carried out at room temperature. Copyright 2004 Elsevier Ltd.

  9. Class Size Effects on Reading Achievement Using PIRLS Data: Evidence from Greece

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros; Traynor, Anne

    2014-01-01

    Background/Context: The effects of class size on student achievement have gained considerable attention in education research and policy, especially over the last 30 years. Perhaps the best evidence about the effects of class size thus far has been produced from analyses of Project STAR data, a large-scale experiment where students and teachers…

  10. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
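
    The estimator N = M / P and its uncertainty can be sketched compactly. The abstract does not give the variance formula; a common choice is the delta method, used below with hypothetical numbers and an assumed design effect.

        import numpy as np
        from scipy.stats import norm

        def multiplier_estimate(M, p_hat, n_eff, alpha=0.05):
            """Population size N = M / p_hat with a delta-method confidence
            interval; n_eff is the survey sample size divided by the assumed
            design effect. A generic sketch, not the authors' procedure."""
            N = M / p_hat
            var_p = p_hat * (1 - p_hat) / n_eff
            se_N = M * np.sqrt(var_p) / p_hat**2   # delta method: |dN/dp| * se(p)
            z = norm.ppf(1 - alpha / 2)
            return N, (N - z * se_N, N + z * se_N)

        # e.g., 2000 unique objects distributed, 40% report receipt,
        # RDS n = 400 with assumed design effect 2, so n_eff = 200
        print(multiplier_estimate(2000, 0.40, 200))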

  11. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
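
    For context, the sketch below computes the widely used design-effect approximation for variable cluster sizes from the existing literature the paper builds on; it is not the noncentrality-based measure the authors define, and the numbers are illustrative.

        def design_effect(mean_m, cv_m, icc):
            """Approximate design effect for cluster randomization with variable
            cluster sizes: 1 + ((cv^2 + 1) * m - 1) * icc, where m is the mean
            cluster size and cv its coefficient of variation."""
            return 1 + ((cv_m**2 + 1) * mean_m - 1) * icc

        equal   = design_effect(mean_m=20, cv_m=0.0, icc=0.05)
        unequal = design_effect(mean_m=20, cv_m=0.6, icc=0.05)
        print(equal, unequal, equal / unequal)  # relative efficiency of unequal sizes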

  12. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems exist when attempting to test the accuracy of thematic maps and mapping: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both of these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated level of data analysis would be to use the entire classification error matrices, using the methods of discrete multivariate analysis or of multivariate analysis of variance.
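
    The commission/omission bookkeeping described here can be made concrete with a small sketch; the matrix counts below are hypothetical.

        import numpy as np

        # Rows: interpreted class, columns: verified class (a classification
        # error matrix with invented counts for three map categories)
        matrix = np.array([[50,  4,  2],
                           [ 6, 40,  5],
                           [ 3,  7, 45]])

        overall_accuracy = np.trace(matrix) / matrix.sum()
        commission_error = 1 - np.diag(matrix) / matrix.sum(axis=1)  # row-wise
        omission_error   = 1 - np.diag(matrix) / matrix.sum(axis=0)  # column-wise
        print(overall_accuracy, commission_error, omission_error)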

  13. Microbiological testing of Skylab foods.

    NASA Technical Reports Server (NTRS)

    Heidelbaugh, N. D.; Mcqueen, J. L.; Rowley, D. B.; Powers, E. M.; Bourland, C. T.

    1973-01-01

    A review of some of the unique food microbiology problems, and the circumstances that generated them, in the Skylab manned space flight program. These problems arose from situations including: extended storage times, variations in storage temperatures, no opportunity to resupply or change foods after launch of the Skylab Workshop, first use of frozen foods in space, first use of a food-warming device in weightlessness, relatively small size of production lots requiring statistically valid sampling plans, and use of food as an accurately controlled part in a set of sophisticated life science experiments. Consideration of all of these situations produced the need for definite microbiological tests and test limits. These tests are described along with the rationale for their selection. Reported test results show good compliance with the test limits.

  14. Carbon nanotubes: properties, synthesis, purification, and medical applications

    PubMed Central

    2014-01-01

    Current discoveries of different forms of carbon nanostructures have motivated research on their applications in various fields. They hold promise for applications in medicine, gene, and drug delivery areas. Many different production methods for carbon nanotubes (CNTs) have been introduced; functionalization, filling, doping, and chemical modification have been achieved, and characterization, separation, and manipulation of individual CNTs are now possible. Parameters such as structure, surface area, surface charge, size distribution, surface chemistry, and agglomeration state as well as purity of the samples have considerable impact on the reactivity of carbon nanotubes. Moreover, the strength and flexibility of carbon nanotubes make them of potential use in controlling other nanoscale structures, which suggests they will have a significant role in nanotechnology engineering. PMID:25170330

  15. Demoralization and attitudes toward residents among certified nurse assistants in relation to job stressors and work resources: cultural diversity in long term care.

    PubMed

    Ramirez, Mildred; Teresi, Jeanne; Holmes, Douglas

    2006-01-01

    Certified Nurse Assistants (CNAs) (n=104) caring for a probability sample of residents in 22 New York State nursing homes were interviewed, longitudinally, regarding work demands and stressors, support and training, and job-stress outcomes. Twenty-seven percent of CNAs reported pejorative name-calling by their residents. Hierarchical regression analyses showed that (a) increase in perceived pressure to complete tasks, (b) assignment size, and (c) attendance at support groups were associated with CNAs' demoralization at follow-up. A decrease in perceived racism and increased in-services about confused residents contributed to more positive attitudes toward residents. Examination of the quality of long-term care should include consideration of cultural diversity.

  16. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  17. Learning class descriptions from a data base of spectral reflectance of soil samples

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Irons, J. R.; Levine, E. R.; Horning, N. A.

    1993-01-01

    Consideration is given to a program developed to learn class descriptions from positive and negative training examples of spectral reflectance data of bare soils. It is a combination of 'learning by example' and the generate-and-test paradigm and is designed to provide a robust learning environment that can handle error-prone data. The program was tested by having it learn class descriptions of various categories of organic carbon content, iron oxide content, and particle size distribution in soils. These class descriptions were then used to classify an array of targets. The program found the sequence of relationships between bands that contained the most important information to distinguish the classes. Physical explanations for the class descriptions obtained are presented.

  18. Carbon nanotubes: properties, synthesis, purification, and medical applications

    NASA Astrophysics Data System (ADS)

    Eatemadi, Ali; Daraee, Hadis; Karimkhanloo, Hamzeh; Kouhi, Mohammad; Zarghami, Nosratollah; Akbarzadeh, Abolfazl; Abasi, Mozhgan; Hanifehpour, Younes; Joo, Sang Woo

    2014-08-01

    Current discoveries of different forms of carbon nanostructures have motivated research on their applications in various fields. They hold promise for applications in medicine, gene, and drug delivery areas. Many different production methods for carbon nanotubes (CNTs) have been introduced; functionalization, filling, doping, and chemical modification have been achieved, and characterization, separation, and manipulation of individual CNTs are now possible. Parameters such as structure, surface area, surface charge, size distribution, surface chemistry, and agglomeration state as well as purity of the samples have considerable impact on the reactivity of carbon nanotubes. Moreover, the strength and flexibility of carbon nanotubes make them of potential use in controlling other nanoscale structures, which suggests they will have a significant role in nanotechnology engineering.

  19. The phenomenon of voltage controlled switching in disordered superconductors.

    PubMed

    Ghosh, Sanjib; De Munshi, D

    2014-01-15

    The superconductor-to-insulator transition (SIT) is a phenomenon occurring in highly disordered superconductors and may be useful in the development of superconducting switches. The SIT has been demonstrated to be induced by different external parameters: temperature, magnetic field, electric field, etc. However, the electric field induced SIT (ESIT), which has been experimentally demonstrated for some specific materials, holds particular promise for practical device development. Here, we demonstrate, from theoretical considerations, the occurrence of the ESIT. We also propose a general switching device architecture using the ESIT and study some of its universal behavior, such as the effects of sample size, disorder strength and temperature on the switching action. This work provides a general framework for the development of such a device.

  20. Carbon nanotubes: properties, synthesis, purification, and medical applications.

    PubMed

    Eatemadi, Ali; Daraee, Hadis; Karimkhanloo, Hamzeh; Kouhi, Mohammad; Zarghami, Nosratollah; Akbarzadeh, Abolfazl; Abasi, Mozhgan; Hanifehpour, Younes; Joo, Sang Woo

    2014-01-01

    Current discoveries of different forms of carbon nanostructures have motivated research on their applications in various fields. They hold promise for applications in medicine, gene, and drug delivery areas. Many different production methods for carbon nanotubes (CNTs) have been introduced; functionalization, filling, doping, and chemical modification have been achieved, and characterization, separation, and manipulation of individual CNTs are now possible. Parameters such as structure, surface area, surface charge, size distribution, surface chemistry, and agglomeration state as well as purity of the samples have considerable impact on the reactivity of carbon nanotubes. Moreover, the strength and flexibility of carbon nanotubes make them of potential use in controlling other nanoscale structures, which suggests they will have a significant role in nanotechnology engineering.

  1. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  2. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    DeSalvo, L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
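
    A pure-Python sketch of the zero-acceptance-number calculation the abstract describes; it reproduces the worked example (lot of 400, 1% nonconforming, 99% confidence, giving n = 273). Unlike HYPERSAMP's iterative technique, this sketch handles only whole-number defect counts.

    ```python
    def min_sample_size(lot_size, defectives, confidence):
        # Smallest n with acceptance number zero such that the hypergeometric
        # probability of drawing zero defectives is at most 1 - confidence.
        beta = 1.0 - confidence
        for n in range(1, lot_size + 1):
            p_accept = 1.0
            for i in range(defectives):
                # C(N - D, n) / C(N, n) expanded as a product, avoiding factorials
                p_accept *= (lot_size - n - i) / (lot_size - i)
            if p_accept <= beta:
                return n
        return lot_size

    # Example from the abstract: lot of 400, 1% nonconforming (4 units), 99% confidence.
    print(min_sample_size(400, 4, 0.99))   # -> 273, versus 400 under the Binomial plan
    ```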

  3. Cobble cam: Grain-size measurements of sand to boulder from digital photographs and autocorrelation analyses

    USGS Publications Warehouse

    Warrick, J.A.; Rubin, D.M.; Ruggiero, P.; Harney, J.N.; Draut, A.E.; Buscombe, D.

    2009-01-01

    A new application of the autocorrelation grain size analysis technique for mixed to coarse sediment settings has been investigated. Photographs of sand- to boulder-sized sediment along the Elwha River delta beach were taken from approximately 1-2 m above the ground surface, and detailed grain size measurements were made from 32 of these sites for calibration and validation. Digital photographs were found to provide accurate estimates of the long and intermediate axes of the surface sediment (r2 > 0.98), but poor estimates of the short axes (r2 = 0.68), suggesting that these short axes were naturally oriented in the vertical dimension. The autocorrelation method was successfully applied, resulting in total irreducible error of 14% over a range of mean grain sizes of 1 to 200 mm. Compared with reported edge and object-detection results, it is noted that the autocorrelation method presented here has lower error and can be applied to a much broader range of mean grain sizes without altering the physical set-up of the camera (~200-fold versus ~6-fold). The approach is considerably less sensitive to lighting conditions than object-detection methods, although autocorrelation estimates do improve when measures are taken to shade sediments from direct sunlight. The effects of wet and dry conditions are also evaluated and discussed. The technique provides an estimate of grain size sorting from the easily calculated autocorrelation standard error, which is correlated with the graphical standard deviation at an r2 of 0.69. The technique is transferable to other sites when calibrated with linear corrections based on photo-based measurements, as shown by excellent grain-size analysis results (r2 = 0.97, irreducible error = 16%) from samples from the mixed grain size beaches of Kachemak Bay, Alaska. Thus, a method has been developed to measure mean grain size and sorting properties of coarse sediments. © 2009 John Wiley & Sons, Ltd.
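
    One common formulation of the underlying autocorrelation computation is sketched below; the rate at which the curve decays with pixel lag is what gets calibrated against physical grain size measurements. The decay summary used here (first lag below a 0.5 correlation) is an illustrative assumption, not necessarily the authors' exact statistic.

    ```python
    import numpy as np

    def autocorrelation_curve(image, max_lag=50):
        # Wiener-Khinchin: autocorrelation = inverse FFT of the power spectrum.
        img = image - image.mean()
        f = np.fft.fft2(img)
        acf = np.fft.ifft2(f * np.conj(f)).real
        acf /= acf[0, 0]                      # lag 0 normalized to correlation 1
        # average the horizontal and vertical profiles out to max_lag pixels
        return (acf[0, :max_lag] + acf[:max_lag, 0]) / 2.0

    def decay_lag(curve, threshold=0.5):
        # First pixel lag at which correlation falls below the threshold;
        # coarser sediment decorrelates at larger lags.
        below = np.nonzero(curve < threshold)[0]
        return below[0] if below.size else len(curve)

    rng = np.random.default_rng(0)
    demo = rng.normal(size=(256, 256))        # stand-in for a sediment photograph
    print(decay_lag(autocorrelation_curve(demo)))   # noise decorrelates immediately

    # Calibration (assumption): regress known mean grain sizes from the 32
    # field sites on decay_lag to convert pixel lags to millimetres.
    ```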

  4. Increased body size along urbanization gradients at both community and intraspecific level in macro-moths.

    PubMed

    Merckx, Thomas; Kaiser, Aurélien; Van Dyck, Hans

    2018-05-23

    Urbanization involves a cocktail of human-induced rapid environmental changes and is forecasted to gain further importance. Urban-heat-island effects result in increased metabolic costs expected to drive shifts towards smaller body sizes. However, urban environments are also characterized by strong habitat fragmentation, often selecting for dispersal phenotypes. Here, we investigate to what extent, and at which spatial scale(s), urbanization drives body size shifts in macro-moths-an insect group characterized by positive size-dispersal links-at both the community and intraspecific level. Using light and bait trapping as part of a replicated, spatially nested sampling design, we show that despite the observed urban warming of their woodland habitat, macro-moth communities display considerable increases in community-weighted mean body size because of stronger filtering against small species along urbanization gradients. Urbanization drives intraspecific shifts towards increased body size too, at least for a third of species analysed. These results indicate that urbanization drives shifts towards larger, and hence, more mobile species and individuals in order to mitigate low connectivity of ecological resources in urban settings. Macro-moths are a key group within terrestrial ecosystems, and since body size is central to species interactions, such urbanization-driven phenotypic change may impact urban ecosystem functioning, especially in terms of nocturnal pollination and food web dynamics. Although we show that urbanization's size-biased filtering happens simultaneously and coherently at both the inter- and intraspecific level, we demonstrate that the impact at the community level is most pronounced at the 800 m radius scale, whereas species-specific size increases happen at local and landscape scales (50-3,200 m radius), depending on the species. Hence, measures-such as creating and improving urban green infrastructure-to mitigate the effects of urbanization on body size will have to be implemented at multiple spatial scales in order to be most effective. © 2018 John Wiley & Sons Ltd.

  5. Sample design effects in landscape genetics

    USGS Publications Warehouse

    Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.

    2012-01-01

    An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially-explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), alleles per locus (5 and 10), individuals sampled (10-300), and generational times after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and number of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.

  6. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
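
    A rough way to see why samples near the median N of 90 are too small: for a correlation-type coefficient such as retest reliability, a Fisher-z interval has half-width of about z/sqrt(N - 3), so pinning a reliability of .80 down to roughly +/-.05 already takes about 158 subjects. The function below is this back-of-envelope calculation, not a method from the article.

    ```python
    import numpy as np
    from scipy import stats

    def n_for_reliability_ci(r, half_width, alpha=0.05):
        # Solve z_{1-a/2} / sqrt(N - 3) = atanh(r + hw) - atanh(r) for N.
        z = stats.norm.ppf(1 - alpha / 2)
        dz = np.arctanh(min(r + half_width, 0.999)) - np.arctanh(r)
        return int(np.ceil((z / dz) ** 2 + 3))

    print(n_for_reliability_ci(0.80, 0.05))   # -> 158
    ```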

  7. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  8. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...

  9. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and examined the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (inter-quartile range -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and in journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
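
    Recomputing a reported calculation, as the authors did, typically needs only the significance level, power, standard deviation, and minimum clinically important difference. A hedged sketch of the standard normal-approximation formula (the example planning values are invented):

    ```python
    from scipy import stats

    def two_sample_n(delta, sd, alpha=0.05, power=0.80):
        # Per-group n, normal approximation for a two-sided comparison of means.
        za = stats.norm.ppf(1 - alpha / 2)
        zb = stats.norm.ppf(power)
        return 2 * ((za + zb) * sd / delta) ** 2

    # Example with invented planning values: detect a 5-point difference, SD 10.
    print(two_sample_n(5, 10))   # ~62.8, round up to 63 per group
    ```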

  10. Impact of intermittent fasting on the lipid profile: Assessment associated with diet and weight loss.

    PubMed

    Santos, Heitor O; Macedo, Rodrigo C O

    2018-04-01

    Intermittent fasting, whose proposed benefits include improvement of the lipid profile and body weight loss, has gained considerable scientific and popular attention. This review aimed to consolidate studies that analyzed the lipid profile in humans before and after a period of intermittent fasting, and to propose the physiological mechanism, considering diet and body weight loss. Normocaloric and hypocaloric intermittent fasting may be a dietary method to aid in the improvement of the lipid profile in healthy, obese and dyslipidemic men and women by reducing total cholesterol, LDL, triglycerides and increasing HDL levels. However, the majority of studies that analyze the impact of intermittent fasting on the lipid profile and body weight loss are observational studies based on Ramadan fasting, which lack large samples and detailed information about diet. Randomized clinical trials with a larger sample size are needed to evaluate the effects of intermittent fasting, mainly in patients with dyslipidemia. Copyright © 2018 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.

  11. Peptidic Macrocycles - Conformational Sampling and Thermodynamic Characterization

    PubMed Central

    2018-01-01

    Macrocycles are of considerable interest as highly specific drug candidates, yet they challenge standard conformer generators with their large number of rotatable bonds and conformational restrictions. Here, we present a molecular dynamics-based routine that bypasses current limitations in conformational sampling and extensively profiles the free energy landscape of peptidic macrocycles in solution. We perform accelerated molecular dynamics simulations to capture a diverse conformational ensemble. By applying an energetic cutoff, followed by geometric clustering, we demonstrate the striking robustness and efficiency of the approach in identifying highly populated conformational states of cyclic peptides. The resulting structural and thermodynamic information is benchmarked against interproton distances from NMR experiments and conformational states identified by X-ray crystallography. Using three different model systems of varying size and flexibility, we show that the method reliably reproduces experimentally determined structural ensembles and is capable of identifying key conformational states that include the bioactive conformation. Thus, the described approach is a robust method to generate conformations of peptidic macrocycles and holds promise for structure-based drug design. PMID:29652495

  12. Peptidic Macrocycles - Conformational Sampling and Thermodynamic Characterization.

    PubMed

    Kamenik, Anna S; Lessel, Uta; Fuchs, Julian E; Fox, Thomas; Liedl, Klaus R

    2018-05-29

    Macrocycles are of considerable interest as highly specific drug candidates, yet they challenge standard conformer generators with their large number of rotatable bonds and conformational restrictions. Here, we present a molecular dynamics-based routine that bypasses current limitations in conformational sampling and extensively profiles the free energy landscape of peptidic macrocycles in solution. We perform accelerated molecular dynamics simulations to capture a diverse conformational ensemble. By applying an energetic cutoff, followed by geometric clustering, we demonstrate the striking robustness and efficiency of the approach in identifying highly populated conformational states of cyclic peptides. The resulting structural and thermodynamic information is benchmarked against interproton distances from NMR experiments and conformational states identified by X-ray crystallography. Using three different model systems of varying size and flexibility, we show that the method reliably reproduces experimentally determined structural ensembles and is capable of identifying key conformational states that include the bioactive conformation. Thus, the described approach is a robust method to generate conformations of peptidic macrocycles and holds promise for structure-based drug design.

  13. Conodont (U-Th)/He thermochronology: Initial results, potential, and problems

    NASA Astrophysics Data System (ADS)

    Peppe, Daniel J.; Reiners, Peter W.

    2007-06-01

    We performed He diffusion experiments and (U-Th)/He age determinations on conodonts from a variety of locations to explore the potential of conodont (U-Th)/He thermochronology to constrain thermal and exhumation histories of some sedimentary-rock dominated terrains. Based on two diffusion experiments and age results from some specimens, He diffusion in conodont elements appears to be similar to that in Durango apatite fragments of similar size, and closure temperatures are approximately 60-70 °C (for cooling rates of ˜ 10 °C/m.y.). (U-Th)/He ages of conodonts from some locations yield reproducible ages consistent with regional thermal history constraints and, in at least two cases, require a closure temperature lower than ˜ 80 °C. Other samples, however, yield irreproducible ages, and in one case yield ages much younger than expected based on regional geologic considerations. These irreproducible samples show inverse correlations between parent nuclides and age, consistent with late-stage open-system U-Th behavior.

  14. Population structure and connectivity of tiger sharks (Galeocerdo cuvier) across the Indo-Pacific Ocean basin

    PubMed Central

    Williams, Samuel M.; Otway, Nicholas M.; Nielsen, Einar E.; Maher, Safia L.; Bennett, Mike B.; Ovenden, Jennifer R.

    2017-01-01

    Population genetic structure using nine polymorphic nuclear microsatellite loci was assessed for the tiger shark (Galeocerdo cuvier) at seven locations across the Indo-Pacific, and one location in the southern Atlantic. Genetic analyses revealed considerable genetic structuring (FST > 0.14, p < 0.001) between all Indo-Pacific locations and Brazil. By contrast, no significant genetic differences were observed between locations from within the Pacific or Indian Oceans, identifying an apparent large, single Indo-Pacific population. A lack of differentiation between tiger sharks sampled in Hawaii and other Indo-Pacific locations identified herein is in contrast to an earlier global tiger shark nDNA study. The results of our power analysis provide evidence to suggest that the larger sample sizes used here negated any weak population subdivision observed previously. These results further highlight the need for cross-jurisdictional efforts to manage the sustainable exploitation of large migratory sharks like G. cuvier. PMID:28791159

  15. Population structure and connectivity of tiger sharks (Galeocerdo cuvier) across the Indo-Pacific Ocean basin.

    PubMed

    Holmes, Bonnie J; Williams, Samuel M; Otway, Nicholas M; Nielsen, Einar E; Maher, Safia L; Bennett, Mike B; Ovenden, Jennifer R

    2017-07-01

    Population genetic structure using nine polymorphic nuclear microsatellite loci was assessed for the tiger shark (Galeocerdo cuvier) at seven locations across the Indo-Pacific, and one location in the southern Atlantic. Genetic analyses revealed considerable genetic structuring (FST > 0.14, p < 0.001) between all Indo-Pacific locations and Brazil. By contrast, no significant genetic differences were observed between locations from within the Pacific or Indian Oceans, identifying an apparent large, single Indo-Pacific population. A lack of differentiation between tiger sharks sampled in Hawaii and other Indo-Pacific locations identified herein is in contrast to an earlier global tiger shark nDNA study. The results of our power analysis provide evidence to suggest that the larger sample sizes used here negated any weak population subdivision observed previously. These results further highlight the need for cross-jurisdictional efforts to manage the sustainable exploitation of large migratory sharks like G. cuvier.

  16. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
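
    A minimal sketch of the blinded re-estimation step: pool the interim data without treatment labels, take the one-sample variance, and plug it into the usual formula. The refinements the abstract studies, such as the upward bias of the blinded variance and the adjusted significance level needed to control the type I error, are deliberately not included here.

    ```python
    import numpy as np
    from scipy import stats

    def blinded_reestimated_n(pooled_interim, delta, alpha=0.05, power=0.90):
        # One-sample variance of the pooled interim data (treatment labels
        # hidden) replaces the planning variance in the usual formula.
        s2 = np.var(pooled_interim, ddof=1)
        za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
        return int(np.ceil(2 * (za + zb) ** 2 * s2 / delta ** 2))

    rng = np.random.default_rng(1)
    pilot = rng.normal(0.5, 2.0, size=60)            # blinded internal pilot data
    print(blinded_reestimated_n(pilot, delta=1.0))   # per-group n at the final analysis
    ```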

  17. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination usually is taught based on theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, explored factors affecting the level of post-test score after training in sample size calculation, and investigated participants' attitudes toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were supplied to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum 10 points, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  18. 76 FR 23335 - Wilderness Stewardship Plan/Environmental Impact Statement, Sequoia and Kings Canyon National...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-26

    ... planning and environmental impact analysis process required to inform consideration of alternative... 5, 1996. Based on an analysis of the numerous scoping comments received, and with consideration of a... proper food storage; party size; camping and campsites; human waste management; stock use; meadow...

  19. Protocol for determining bull trout presence

    USGS Publications Warehouse

    Peterson, James; Dunham, Jason B.; Howell, Philip; Thurow, Russell; Bonar, Scott

    2002-01-01

    The Western Division of the American Fisheries Society was requested to develop protocols for determining presence/absence and potential habitat suitability for bull trout. The general approach adopted is similar to the process for the marbled murrelet, whereby interim guidelines are initially used, and the protocols are subsequently refined as data are collected. Current data were considered inadequate to precisely identify suitable habitat but could be useful in stratifying sampling units for presence/absence surveys. The presence/absence protocol builds on previous approaches (Hillman and Platts 1993; Bonar et al. 1997), except it uses the variation in observed bull trout densities instead of a minimum threshold density and adjusts for measured differences in sampling efficiency due to gear types and habitat characteristics. The protocol consists of: 1. recommended sample sizes with 80% and 95% detection probabilities for juvenile and resident adult bull trout for day and night snorkeling and electrofishing, adjusted for varying habitat characteristics, for 50m and 100m sampling units; 2. sampling design considerations, including possible habitat characteristics for stratification; 3. habitat variables to be measured in the sampling units; and 4. guidelines for training sampling crews. Criteria for habitat strata consist of coarse, watershed-scale characteristics (e.g., mean annual air temperature) and fine-scale, reach and habitat-specific features (e.g., water temperature, channel width). The protocols will be revised in the future using data from ongoing presence/absence surveys, additional research on sampling efficiencies, and development of models of habitat/species occurrence.

  20. Phenotypic Association Analyses With Copy Number Variation in Recurrent Depressive Disorder.

    PubMed

    Rucker, James J H; Tansey, Katherine E; Rivera, Margarita; Pinto, Dalila; Cohen-Woods, Sarah; Uher, Rudolf; Aitchison, Katherine J; Craddock, Nick; Owen, Michael J; Jones, Lisa; Jones, Ian; Korszun, Ania; Barnes, Michael R; Preisig, Martin; Mors, Ole; Maier, Wolfgang; Rice, John; Rietschel, Marcella; Holsboer, Florian; Farmer, Anne E; Craig, Ian W; Scherer, Stephen W; McGuffin, Peter; Breen, Gerome

    2016-02-15

    Defining the molecular genomic basis of the likelihood of developing depressive disorder is a considerable challenge. We previously associated rare, exonic deletion copy number variants (CNV) with recurrent depressive disorder (RDD). Sex chromosome abnormalities also have been observed to co-occur with RDD. In this reanalysis of our RDD dataset (N = 3106 cases; 459 screened control samples and 2699 population control samples), we further investigated the role of larger CNVs and chromosomal abnormalities in RDD and performed association analyses with clinical data derived from this dataset. We found an enrichment of Turner's syndrome among cases of depression compared with the frequency observed in a large population sample (N = 34,910) of live-born infants collected in Denmark (two-sided p = .023, odds ratio = 7.76 [95% confidence interval = 1.79-33.6]), a case of diploid/triploid mosaicism, and several cases of uniparental isodisomy. In contrast to our previous analysis, large deletion CNVs were no more frequent in cases than control samples, although deletion CNVs in cases contained more genes than control samples (two-sided p = .0002). After statistical correction for multiple comparisons, our data do not support a substantial role for CNVs in RDD, although (as has been observed in similar samples) occasional cases may harbor large variants with etiological significance. Genetic pleiotropy and sample heterogeneity suggest that very large sample sizes are required to study conclusively the role of genetic variation in mood disorders. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  1. Exploration of time-course combinations of outcome scales for use in a global test of stroke recovery.

    PubMed

    Goldie, Fraser C; Fulton, Rachael L; Dawson, Jesse; Bluhmki, Erich; Lees, Kennedy R

    2014-08-01

    Clinical trials for acute ischemic stroke treatment require large numbers of participants and are expensive to conduct. Methods that enhance statistical power are therefore desirable. We explored whether this can be achieved by a measure incorporating both early and late measures of outcome (e.g. seven-day NIH Stroke Scale combined with 90-day modified Rankin scale). We analyzed sensitivity to treatment effect, using proportional odds logistic regression for ordinal scales and the generalized estimating equation method for global outcomes, with all analyses adjusted for baseline severity and age. We ran simulations to assess relations between sample size and power for ordinal scales and corresponding global outcomes. We used R version 2.12.1 (R Development Core Team. R Foundation for Statistical Computing, Vienna, Austria) for simulations and SAS 9.2 (SAS Institute Inc., Cary, NC, USA) for all other analyses. Each scale considered for combination was sensitive to treatment effect in isolation. The mRS90 and NIHSS90 had adjusted odds ratios of 1.56 and 1.62, respectively. Adjusted odds ratios for global outcomes of the combination of mRS90 with NIHSS7 and NIHSS90 with NIHSS7 were 1.69 and 1.73, respectively. The smallest sample sizes required to generate statistical power ≥80% for mRS90, NIHSS7, and global outcomes of mRS90 and NIHSS7 combined and NIHSS90 and NIHSS7 combined were 500, 490, 400, and 380, respectively. When data concerning both early and late outcomes are combined into a global measure, there is increased sensitivity to treatment effect compared with solitary ordinal scales. This delivers a 20% reduction in required sample size at 80% power. Combining early with late outcomes merits further consideration. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.
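
    The paper fits its global statistic by generalized estimating equations; a simpler, commonly used analogue of combining an early and a late outcome into a single treatment test is an O'Brien-type rank-sum score, sketched here with invented data:

    ```python
    import numpy as np
    from scipy import stats

    def obrien_global_test(early_a, late_a, early_b, late_b):
        # Rank each outcome over the pooled sample, sum a subject's ranks across
        # outcomes, then compare rank-sum scores between groups (Welch t-test).
        early = stats.rankdata(np.concatenate([early_a, early_b]))
        late = stats.rankdata(np.concatenate([late_a, late_b]))
        score = early + late
        na = len(early_a)
        return stats.ttest_ind(score[:na], score[na:], equal_var=False)

    rng = np.random.default_rng(0)
    print(obrien_global_test(rng.normal(0.3, 1, 200), rng.normal(0.3, 1, 200),
                             rng.normal(0.0, 1, 200), rng.normal(0.0, 1, 200)))
    ```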

  2. Conceptual design considerations and neutronics of lithium fall laser fusion target chambers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meier, W.R.; Thomson, W.B.

    1978-05-31

    Atomics International and Lawrence Livermore Laboratory are involved in the conceptual design of a laser fusion power plant incorporating the lithium fall target chamber. In this paper we discuss some of the more important design considerations for the target chamber and evaluate its nuclear performance. Sizing and configuration of the fall, hydraulic effects, and mechanical design considerations are addressed. The nuclear aspects examined include tritium breeding, energy deposition, and radiation damage.

  3. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
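
    For the heterogeneous-variance setting this abstract addresses, one standard computational route (a sketch under textbook assumptions, not necessarily the authors' formula) is to scan n upward through the noncentral t distribution with Welch-Satterthwaite degrees of freedom:

    ```python
    import numpy as np
    from scipy import stats

    def welch_n(delta, sd1, sd2, ratio=1.0, alpha=0.05, power=0.80):
        # Smallest n1 (with n2 = ratio * n1) reaching the target power for the
        # Welch test, scanning n and using the noncentral t distribution.
        for n1 in range(2, 10000):
            n2 = max(2, int(np.ceil(ratio * n1)))
            se2 = sd1**2 / n1 + sd2**2 / n2
            ncp = delta / np.sqrt(se2)
            # Welch-Satterthwaite degrees of freedom
            df = se2**2 / ((sd1**2 / n1)**2 / (n1 - 1) + (sd2**2 / n2)**2 / (n2 - 1))
            tcrit = stats.t.ppf(1 - alpha / 2, df)
            achieved = (1 - stats.nct.cdf(tcrit, df, ncp)) + stats.nct.cdf(-tcrit, df, ncp)
            if achieved >= power:
                return n1, n2

    print(welch_n(delta=0.5, sd1=1.0, sd2=2.0))
    ```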

  4. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
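
    The article works in R; the same Monte Carlo logic is sketched here in Python with an invented effect size: simulate data at a candidate n, test the parameter of interest, estimate power as the rejection rate, and increase n until the target is reached. In simple linear regression the slope test coincides with the Pearson correlation test, which keeps the sketch short.

    ```python
    import numpy as np
    from scipy import stats

    def mc_power(n, beta=0.3, reps=2000, alpha=0.05, seed=0):
        # Rejection rate of the slope test at sample size n.
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(reps):
            x = rng.normal(size=n)
            y = beta * x + rng.normal(size=n)
            r, p = stats.pearsonr(x, y)   # equivalent to the slope t-test here
            hits += p < alpha
        return hits / reps

    for n in range(40, 200, 10):          # coarse search grid (assumption)
        if mc_power(n) >= 0.80:
            print("approximate required n:", n)
            break
    ```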

  5. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of the kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
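
    The nomogram's premise, planning in terms of a plain proportion of agreement rather than kappa, has a simple precision-based analogue (a sketch, not the paper's goodness-of-fit derivation): choose n so that a Wald confidence interval for the agreement proportion has a desired half-width.

    ```python
    import numpy as np
    from scipy import stats

    def n_for_agreement(p_agree, half_width, alpha=0.05):
        # Number of rating pairs so that the CI for a simple proportion of
        # agreement has the requested half-width.
        z = stats.norm.ppf(1 - alpha / 2)
        return int(np.ceil(z**2 * p_agree * (1 - p_agree) / half_width**2))

    print(n_for_agreement(0.85, 0.05))   # -> 196 rating pairs
    ```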

  6. Entry System Design Considerations for Mars Landers

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae; Powell, Richard W.; Graves, Claude A.; Carman, Gilbert L.

    2001-01-01

    The objective for the next generation of Mars landers is to enable a safe landing at specific locations of scientific interest. The 1st generation entry, descent and landing systems, e.g., Viking and Pathfinder, provided successful landings on Mars but by design were limited to large-scale, 100s of km, landing sites with minimal local hazards. The 2nd generation landers, or smart landers, will provide scientists with access to previously unachievable landing sites by providing precision landing to less than 10 km of a target landing site, with the ability to perform local hazard avoidance and provide hazard tolerance. This 2nd generation EDL system can be utilized for a range of robotic missions, with vehicles sized for science payloads from the small 25-70 kg Viking, Pathfinder, Mars Polar Lander and Mars Exploration Rover-class, to the large robotic Mars Sample Return, 300 kg plus, science payloads. The 2nd generation system can also be extended to a 3rd generation EDL system with pinpoint landing, tens of meters of landing accuracy, for more capable robotic or human missions. This paper will describe the design considerations for 2nd generation landers. These landers are currently being developed by a consortium of NASA centers, government agencies, industry and academic institutions. The extension of this system and additional considerations required for a 3rd generation human mission to Mars will be described.

  7. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
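
    For intuition about what 'optimal' means here, the textbook budget-constrained design for a cluster randomized trial (ignoring the cost-effect correlations this paper adds; all numbers invented) picks the cluster size from the cost ratio and the ICC:

    ```python
    import numpy as np

    def optimal_cluster_design(icc, c_cluster, c_person, budget):
        # Cluster size minimizing the treatment-effect variance for a fixed
        # budget: m = sqrt((c_cluster / c_person) * (1 - icc) / icc).
        m = max(1, round(np.sqrt((c_cluster / c_person) * (1 - icc) / icc)))
        k = int(budget // (c_cluster + c_person * m))   # clusters affordable
        return k, m

    print(optimal_cluster_design(icc=0.05, c_cluster=500, c_person=25, budget=50000))
    ```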

  8. Lead and Arsenic Bioaccessibility and Speciation as a Function of Soil Particle Size

    EPA Science Inventory

    Bioavailability research of soil metals has advanced considerably from default values to validated in vitro bioaccessibility (IVBA) assays for site-specific risk assessment. Previously, USEPA determined that the soil-size fraction representative of dermal adherence and consequent...

  9. Using specific volume increment (SVI) for quantifying growth responses in trees - theoretical and practical considerations

    Treesearch

    Eddie Bevilacqua

    2002-01-01

    Comparative analysis of growth responses among trees following natural or anthropogenic disturbances is often confounded when comparing trees of different size because of the high correlation between growth and initial tree size: large trees tend to have higher absolute growth rates. Relative growth rate (RGR) may not be the most suitable size-dependent measure of growth...

  10. Non-destructive identification of unknown minor phases in polycrystalline bulk alloys using three-dimensional X-ray diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yiming, E-mail: yangyiming1988@outlook.com

    Minor phases make considerable contributions to the mechanical and physical properties of metals and alloys. Unfortunately, it is difficult to identify unknown minor phases in a bulk polycrystalline material using conventional metallographic methods. Here, a non-destructive method based on three-dimensional X-ray diffraction (3DXRD) is developed to solve this problem. Simulation results demonstrate that this method is simultaneously able to identify minor phase grains and reveal their positions, orientations and sizes within bulk alloys. According to systematic simulations, the 3DXRD method is practicable for an extensive sample set, including polycrystalline alloys with hexagonal, orthorhombic and cubic minor phases. Experiments were also conducted to confirm the simulation results. The results for a bulk sample of aluminum alloy AA6061 show that the crystal grains of an unexpected γ-Fe (austenite) phase can be identified, three-dimensionally and nondestructively. Therefore, we conclude that the 3DXRD method is a powerful tool for the identification of unknown minor phases in bulk alloys belonging to a variety of crystal systems. This method also has the potential to be used for in situ observations of the effects of minor phases on the crystallographic behaviors of alloys. - Highlights: •A method based on 3DXRD is developed for identification of unknown minor phases. •Grain position, orientation and size are simultaneously acquired. •A systematic simulation demonstrated the applicability of the proposed method. •Experimental results on an AA6061 sample confirmed the practicability of the method.

  11. Nonparametric relevance-shifted multiple testing procedures for the analysis of high-dimensional multivariate data with small sample sizes.

    PubMed

    Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried

    2008-01-27

    In many research areas it is necessary to find differences between treatment groups with several variables. For example, studies of microarray data seek to find a significant difference in location parameters from zero, or from one for ratios thereof, for each variable. However, in some studies a significant deviation of the difference in locations from zero (or 1 in terms of the ratio) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straightforward solutions, the problems that motivate the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space between these two limits has to be accepted as the null hypothesis. The first algorithm to be discussed uses a permutation algorithm and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes. Then the second procedure might be more appropriate, where multiplicity is corrected according to a concept of data-driven order of hypotheses.

  12. Resuspension of soil as a source of airborne lead near industrial facilities and highways.

    PubMed

    Young, Thomas M; Heeraman, Deo A; Sirin, Gorkem; Ashbaugh, Lowell L

    2002-06-01

    Geologic materials are an important source of airborne particulate matter less than 10 microm aerodynamic diameter (PM10), but the contribution of contaminated soil to concentrations of Pb and other trace elements in air has not been documented. To examine the potential significance of this mechanism, surface soil samples with a range of bulk soil Pb concentrations were obtained near five industrial facilities and along roadsides and were resuspended in a specially designed laboratory chamber. The concentration of Pb and other trace elements was measured in the bulk soil, in soil size fractions, and in PM10 generated during resuspension of soils and fractions. Average yields of PM10 from dry soils ranged from 0.169 to 0.869 mg of PM10/g of soil. Yields declined approximately linearly with increasing geometric mean particle size of the bulk soil. The resulting PM10 had average Pb concentrations as high as 2283 mg/kg for samples from a secondary Pb smelter. Pb was enriched in PM10 by 5.36-88.7 times as compared with uncontaminated California soils. Total production of PM10 bound Pb from the soil samples varied between 0.012 and 1.2 mg of Pb/kg of bulk soil. During a relatively large erosion event, a contaminated site might contribute approximately 300 ng/m3 of PM10-bound Pb to air. Contribution of soil from contaminated sites to airborne element balances thus deserves consideration when constructing receptor models for source apportionment or attempting to control airborne Pb emissions.

  13. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
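
    Because the first decision rule requires superiority on both endpoints, power is a joint probability. A fixed-design sketch with invented effect sizes (no interim looks, so this is only the one-look building block of the group-sequential setting): two correlated z-statistics must both clear the critical value, evaluated with a bivariate normal.

    ```python
    import numpy as np
    from scipy import stats

    def power_coprimary(n, delta1, delta2, rho, alpha=0.025):
        # P(Z1 > z and Z2 > z) for correlated endpoint statistics, where
        # Zk ~ N(delta_k * sqrt(n / 2), 1) with correlation rho (per-group n).
        z = stats.norm.ppf(1 - alpha)
        m1, m2 = delta1 * np.sqrt(n / 2), delta2 * np.sqrt(n / 2)
        mvn = stats.multivariate_normal(mean=[-m1, -m2], cov=[[1, rho], [rho, 1]])
        return mvn.cdf([-z, -z])   # joint upper tail via reflection

    n = 50
    while power_coprimary(n, 0.4, 0.35, rho=0.5) < 0.80:
        n += 5
    print(n)   # smallest per-group n on this grid giving 80% joint power
    ```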

  14. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
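
    Since the formulas are approximate, a simulation check of a candidate sample size is natural. SciPy's ttest_ind with trim=0.2 implements Yuen's trimmed-mean test, so power under unequal variances can be estimated directly (effect size and SDs invented):

    ```python
    import numpy as np
    from scipy import stats

    def yuen_power(n1, n2, delta, sd1, sd2, trim=0.2, reps=2000, seed=0):
        # Simulated power of Yuen's trimmed-mean test under unequal
        # variances and (possibly) unequal sample sizes.
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(reps):
            a = rng.normal(0.0, sd1, n1)
            b = rng.normal(delta, sd2, n2)
            hits += stats.ttest_ind(a, b, equal_var=False, trim=trim).pvalue < 0.05
        return hits / reps

    # Increase n until the simulated power reaches 1 - beta = 0.80.
    n = 20
    while yuen_power(n, n, delta=0.8, sd1=1.0, sd2=2.0) < 0.80:
        n += 5
    print(n)
    ```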

  15. Microwave-assisted green synthesis of silver nanostructures.

    PubMed

    Nadagouda, Mallikarjuna N; Speth, Thomas F; Varma, Rajender S

    2011-07-19

    Over the past 25 years, microwave (MW) chemistry has moved from a laboratory curiosity to a well-established synthetic technique used in many academic and industrial laboratories around the world. Although the overwhelming number of MW-assisted applications today are still performed on a laboratory (mL) scale, we expect that this enabling technology may be used on a larger, perhaps even production, scale in conjunction with radio frequency or conventional heating. Microwave chemistry is based on two main principles, the dipolar mechanism and the electrical conductor mechanism. The dipolar mechanism occurs when, under a very high frequency electric field, a polar molecule attempts to follow the field in the same alignment. When this happens, the molecules release enough heat to drive the reaction forward. In the second mechanism, the irradiated sample is an electrical conductor and the charge carriers, ions and electrons, move through the material under the influence of the electric field and lead to polarization within the sample. These induced currents and any electrical resistance will heat the sample. This Account summarizes a microwave (MW)-assisted synthetic approach for producing silver nanostructures. MW heating has received considerable attention as a promising new method for the one-pot synthesis of metallic nanostructures in solutions. Researchers have successfully demonstrated the application of this method in the preparation of silver (Ag), gold (Au), platinum (Pt), and gold-palladium (Au-Pd) nanostructures. MW heating conditions allow not only for the preparation of spherical nanoparticles within a few minutes but also for the formation of single crystalline polygonal plates, sheets, rods, wires, tubes, and dendrites. The morphologies and sizes of the nanostructures can be controlled by changing various experimental parameters, such as the concentration of metallic salt precursors, the surfactant polymers, the chain length of the surfactant polymers, the solvents, and the operation reaction temperature. In general, nanostructures with smaller sizes, narrower size distributions, and a higher degree of crystallization have been obtained more consistently via MW heating than by heating with a conventional oil-bath. The use of microwaves to heat samples is a viable avenue for the greener synthesis of nanomaterials and provides several desirable features such as shorter reaction times, reduced energy consumption, and better product yields.

  16. Engaging workplace representatives in research: what recruitment strategies work best?

    PubMed

    Coole, C; Nouri, F; Narayanasamy, M; Baker, P; Khan, S; Drummond, A

    2018-05-23

    Workplaces are key stakeholders in work and health, but little is known about the methods used to recruit workplace representatives (WRs), including managers, occupational health advisers and colleagues, to externally funded healthcare research studies. To detail the strategies used in recruiting WRs from three areas of the UK to a qualitative study concerning their experience of employees undergoing hip or knee replacement, to compare the strategies, and to inform recruitment methods for future studies. Six strategies were used to recruit WRs from organizations of different sizes and sectors. Data on the numbers approached and the responses received were analysed descriptively. Twenty-five WRs were recruited. Recruitment had to be extended outside the three main study areas and took several months. It proved more difficult to recruit from non-service sectors and small- and medium-sized enterprises. The most successful strategies were approaching organizations that had participated in previous research studies or that were known professionally or personally to team members. Recruiting a diverse sample of WRs to healthcare research requires considerable resources, persistence, and a range of strategies. Recruitment is easier where local relationships already exist; the importance of building and maintaining these relationships cannot be overestimated. However, the potential risks of bias and participant fatigue need to be acknowledged and managed. Further studies are needed to explore how WRs can be recruited to health research, and to identify the researcher effort and costs involved in achieving unbiased and representative samples.

  17. Theoretical basis, application, reliability, and sample size estimates of a Meridian Energy Analysis Device for Traditional Chinese Medicine Research.

    PubMed

    Tsai, Ming-Yen; Chen, Shih-Yu; Lin, Chung-Chun

    2017-04-01

    The Meridian Energy Analysis Device is currently a popular tool in the scientific research of meridian electrophysiology. In this field, it is generally believed that measuring the electrical conductivity of meridians provides information about the balance of bioenergy or Qi-blood in the body. This communication draws on original articles indexed in PubMed from 1956 to 2014 and on the author's clinical experience. We provide clinical examples of Meridian Energy Analysis Device application, especially in the field of traditional Chinese medicine, discuss the reliability of the measurements, and put the values obtained into context by considering items of considerable variability and by estimating sample size. The Meridian Energy Analysis Device is making a valuable contribution to the diagnosis of Qi-blood dysfunction, which can be assessed from short-term and long-term meridian bioenergy recordings. It is one of the few methods that allow outpatient traditional Chinese medicine diagnosis, monitoring of progress, assessment of therapeutic effect, and evaluation of patient prognosis. The holistic approaches underlying the practice of traditional Chinese medicine and new trends in modern medicine toward the use of objective instruments require in-depth knowledge of the mechanisms of meridian energy, and the Meridian Energy Analysis Device can feasibly be used for understanding and interpreting traditional Chinese medicine theory, especially in view of its expansion in Western countries.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir

    Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. The structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1-2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the MeIM/Zn²⁺ molar ratio increased the ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature enhanced the textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ enhanced the textural properties of ZIF-8 samples.

  19. Observing the Global Water Cycle from Space

    NASA Technical Reports Server (NTRS)

    Hildebrand, P. H.

    2004-01-01

    This paper presents an approach to measuring all major components of the water cycle from space. Key elements of the global water cycle are discussed in terms of the storage of water in the ocean, air, cloud and precipitation, in soil, ground water, snow and ice, and in lakes and rivers, and in terms of the global fluxes of water between these reservoirs. Approaches to measuring or otherwise evaluating the global water cycle are presented, and the limitations on known accuracy for many components of the water cycle are discussed, as are the characteristic spatial and temporal scales of the different water cycle components. Using these observational requirements for a global water cycle observing system, an approach to measuring the global water cycle from space is developed. The capabilities of various active and passive microwave instruments are discussed, as is the potential of supporting measurements from other sources. Examples of space observational systems, including TRMM/GPM precipitation measurement, cloud radars, soil moisture, sea surface salinity, temperature and humidity profiling, other measurement approaches, and assimilation of the microwave and other data into interpretative computer models are discussed to develop the observational possibilities. The selection of orbits is then addressed, since orbit selection and antenna size/beamwidth considerations determine the sampling characteristics of satellite measurement systems. These considerations dictate a particular set of measurement possibilities, which are then matched to the observational sampling requirements based on the science. The results define a network of satellite instrumentation systems, many in low Earth orbit, a few in geostationary orbit, all tied together through a sampling network that feeds the observations into a data-assimilative computer model.
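
    The antenna-size/beamwidth trade-off can be made concrete with a back-of-envelope calculation (not taken from the paper; all numbers below are hypothetical): a diffraction-limited beamwidth of roughly 1.22λ/D, projected from orbit altitude H, sets the ground footprint and hence the spatial sampling of a microwave instrument.

      import math

      def footprint_km(freq_ghz, antenna_diam_m, altitude_km):
          wavelength_m = 0.2998 / freq_ghz              # c / f, in metres
          theta = 1.22 * wavelength_m / antenna_diam_m  # beamwidth (radians)
          return altitude_km * theta                    # small-angle footprint

      # e.g., a 10 GHz channel, 2 m dish, 700 km low Earth orbit
      print(f"{footprint_km(10.0, 2.0, 700.0):.1f} km footprint")  # ~12.8 km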

  20. Effects of Calibration Sample Size and Item Bank Size on Ability Estimation in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Sahin, Alper; Weiss, David J.

    2015-01-01

    This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
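
    The three-parameter logistic (3PL) model used for the simulated bank has a closed form that is easy to sketch; the item parameters below are illustrative assumptions, not values from the study.

      import numpy as np

      def p_correct_3pl(theta, a, b, c):
          """P(correct | ability theta): discrimination a, difficulty b,
          pseudo-guessing lower asymptote c."""
          return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

      rng = np.random.default_rng(0)
      theta = rng.normal(size=10_000)                # simulated examinee abilities
      p = p_correct_3pl(theta, a=1.2, b=0.0, c=0.2)
      responses = rng.random(10_000) < p             # simulated item responses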

  1. Sample size calculations for case-control studies

    Cancer.gov

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as a binary, ordinal or continuous exposure, and can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
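
    The package itself is in R and handles confounder adjustment within a multivariate logistic model; as a simpler illustration of the underlying idea, here is the classic two-group formula for an unmatched 1:1 case-control design with a binary exposure, sketched in Python (a hedged illustration, not the package's method).

      import math
      from scipy.stats import norm

      def cc_n_per_group(p0, odds_ratio, alpha=0.05, power=0.8):
          """Cases (= controls) needed to detect `odds_ratio` when a
          fraction p0 of controls is exposed (no confounder adjustment)."""
          p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))  # exposure in cases
          pbar = (p0 + p1) / 2
          za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
          num = (za * math.sqrt(2 * pbar * (1 - pbar))
                 + zb * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1)))**2
          return math.ceil(num / (p1 - p0)**2)

      print(cc_n_per_group(p0=0.3, odds_ratio=2.0))  # -> 141 per group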

  2. A novel bio-safe phase separation process for preparing open-pore biodegradable polycaprolactone microparticles.

    PubMed

    Salerno, Aurelio; Domingo, Concepción

    2014-09-01

    Open-pore biodegradable microparticles are object of considerable interest for biomedical applications, particularly as cell and drug delivery carriers in tissue engineering and health care treatments. Furthermore, the engineering of microparticles with well definite size distribution and pore architecture by bio-safe fabrication routes is crucial to avoid the use of toxic compounds potentially harmful to cells and biological tissues. To achieve this important issue, in the present study a straightforward and bio-safe approach for fabricating porous biodegradable microparticles with controlled morphological and structural features down to the nanometer scale is developed. In particular, ethyl lactate is used as a non-toxic solvent for polycaprolactone particles fabrication via a thermal induced phase separation technique. The used approach allows achieving open-pore particles with mean particle size in the 150-250 μm range and a 3.5-7.9 m(2)/g specific surface area. Finally, the combination of thermal induced phase separation and porogen leaching techniques is employed for the first time to obtain multi-scaled porous microparticles with large external and internal pore sizes and potential improved characteristics for cell culture and tissue engineering. Samples were characterized to assess their thermal properties, morphology and crystalline structure features and textural properties. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
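
    The 'basic' scheme lends itself to a compact sketch: draw half the Welfare Quality sample, stop if the interim prevalence is clearly above or below the pass/fail threshold, and otherwise sample the second half. The stop/continue margin below is a hypothetical illustration, not the authors' calibrated rule.

      import numpy as np

      def two_stage_classify(herd, wq_n, threshold, margin=0.10, seed=None):
          """herd: array of 0/1 lameness scores for every cow on the farm."""
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(herd), size=wq_n, replace=False)
          half = wq_n // 2
          p1 = herd[idx[:half]].mean()           # interim prevalence estimate
          if abs(p1 - threshold) > margin:       # clear-cut: stop early
              return ("fail" if p1 > threshold else "pass"), half
          p = herd[idx].mean()                   # second sampling event
          return ("fail" if p > threshold else "pass"), wq_n

      herd = (np.random.default_rng(1).random(200) < 0.25).astype(float)
      print(two_stage_classify(herd, wq_n=60, threshold=0.20, seed=2))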

  4. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ÊS) and a 95% CI (ÊS_L, ÊS_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ÊS_U), n_U(ÊS_L)] were obtained on a post hoc sample size reflecting the uncertainty in ÊS. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ÊS, ES = ÊS_L and ES = ÊS_U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study ÊSs. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
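
    In its normal-approximation form the post hoc sample size logic reduces to n = ((z_{1-α/2} + z_{1-β}) / ES)²; evaluating it at the point estimate and at the CI limits yields interval estimates like those quoted above. A sketch with hypothetical effect sizes (the paper used a t-test, so exact numbers would differ slightly):

      import math
      from scipy.stats import norm

      def n_for_effect_size(es, alpha=0.05, power=0.8):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil((z / es)**2)

      for es in (0.62, 0.18, 0.90):   # hypothetical ES estimate and CI limits
          print(f"ES = {es}: n = {n_for_effect_size(es)}")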

  5. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
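
    The Monte Carlo logic is simple to sketch: repeatedly subsample n trees from the fully measured plot and track how the error of the plot-mean sap flux estimate shrinks with n, flattening once the optimal sample size is reached. The data below are simulated stand-ins for the 58 measured trees.

      import numpy as np

      rng = np.random.default_rng(42)
      fd_all = rng.lognormal(mean=0.0, sigma=0.4, size=58)  # per-tree sap flux
      true_mean = fd_all.mean()

      for n in (5, 10, 15, 20, 30):
          means = [rng.choice(fd_all, size=n, replace=False).mean()
                   for _ in range(10_000)]
          rel_err = np.percentile(np.abs(np.array(means) / true_mean - 1), 95)
          print(f"n={n:2d}: 95th-percentile relative error = {rel_err:.1%}")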

  6. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys, using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations to the underlying power curves for detecting a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey of an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR; in this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
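
    At the heart of the calculator is the reverse catalytic model, in which expected seroprevalence at age a, given an SCR λ and a seroreversion rate ρ, is P(a) = λ/(λ+ρ) · (1 - e^(-(λ+ρ)a)). A minimal sketch of the model and one simulated cross-section (parameter values are hypothetical; the full calculator additionally simulates a change in SCR at the change point):

      import numpy as np

      def seroprev(age, scr, rho):
          return scr / (scr + rho) * (1 - np.exp(-(scr + rho) * age))

      rng = np.random.default_rng(3)
      ages = rng.integers(1, 60, size=500)           # hypothetical survey ages
      positive = rng.random(500) < seroprev(ages, scr=0.05, rho=0.01)
      print(f"observed seroprevalence: {positive.mean():.2%}")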

  7. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
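
    The subsampling experiment is easy to reproduce in outline: fit a log-log regression at decreasing sample sizes and count how often a genuinely allometric slope is mistaken for isometry (Type II error). The synthetic data below are stand-ins for the Alligator skull measurements.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      log_len = rng.uniform(1.0, 3.0, size=200)
      log_wid = 0.85 * log_len + rng.normal(0, 0.08, 200)  # true slope != 1

      for n in (100, 30, 10, 5):
          misses = 0
          for _ in range(2_000):
              idx = rng.choice(200, size=n, replace=False)
              res = stats.linregress(log_len[idx], log_wid[idx])
              t = (res.slope - 1) / res.stderr   # test H0: slope = 1 (isometry)
              if abs(t) < stats.t.ppf(0.975, n - 2):
                  misses += 1                    # allometry went undetected
          print(f"n={n:3d}: Type II error rate ~ {misses / 2_000:.2f}")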

  8. Sources of variation in detection of wading birds from aerial surveys in the Florida Everglades

    USGS Publications Warehouse

    Conroy, M.J.; Peterson, J.T.; Bass, O.L.; Fonnesbeck, C.J.; Howell, J.E.; Moore, C.T.; Runge, J.P.

    2008-01-01

    We conducted dual-observer trials to estimate detection probabilities (the probability that a group that is present and available is detected) for fixed-wing aerial surveys of wading birds in the Everglades system, Florida. Detection probability ranged from <0.2 to ~0.75 and varied according to species, group size, observer, and the observer's position in the aircraft (front or rear seat). Aerial-survey simulations indicated that incomplete detection can have a substantial effect on assessment of population trends, particularly over relatively short intervals (<= 3 years) and small annual changes in population size (<= 3%). We conclude that detection bias is an important consideration for interpreting observations from aerial surveys of wading birds, potentially limiting the use of these data for comparative purposes and trend analyses. We recommend that workers conducting aerial surveys for wading birds endeavor to reduce observer and other controllable sources of detection bias, and account for uncontrollable sources through incorporation of dual-observer or other calibration methods as part of survey design (e.g., using double sampling).
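
    The dual-observer design yields seat-specific detection probabilities via the standard conditional (capture-recapture-style) estimator; a hedged sketch with made-up counts, not the authors' fitted model:

      def dual_observer_p(n_front_only, n_rear_only, n_both):
          """Detection probabilities from counts of groups seen by the
          front observer only, rear observer only, and both observers."""
          p_front = n_both / (n_rear_only + n_both)  # front's hit rate on
                                                     # groups known present
          p_rear = n_both / (n_front_only + n_both)
          return p_front, p_rear

      print(dual_observer_p(n_front_only=40, n_rear_only=15, n_both=60))
      # -> (0.8, 0.6): the front seat detects more, as reported above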

  9. Memory bias for threatening information in anxiety and anxiety disorders: a meta-analytic review.

    PubMed

    Mitte, Kristin

    2008-11-01

    Although some theories suggest that anxious individuals selectively remember threatening stimuli, findings remain contradictory despite a considerable amount of research. A quantitative integration of 165 studies with 9,046 participants (clinical and nonclinical samples) examined whether a memory bias exists and which moderator variables influence its magnitude. Implicit memory bias was investigated in lexical decision/stimulus identification and word-stem completion paradigms; explicit memory bias was investigated in recognition and recall paradigms. Overall, effect sizes showed no significant impact of anxiety on implicit memory and recognition. Analyses indicated a memory bias for recall, whose magnitude depended on experimental study procedures like the encoding procedure or retention interval. Anxiety influenced recollection of previous experiences; anxious individuals favored threat-related information. Across all paradigms, clinical status was not significantly linked to effect sizes, indicating no qualitative difference in information processing between anxiety patients and high-anxious persons. The large discrepancy between study effects in recall and recognition indicates that future research is needed to identify moderator variables for avoidant and preferred remembering.

  10. Environmental heterogeneity, dispersal mode, and co-occurrence in stream macroinvertebrates

    PubMed Central

    Heino, Jani

    2013-01-01

    Both environmental heterogeneity and mode of dispersal may affect species co-occurrence in metacommunities. Aquatic invertebrates were sampled in 20–30 streams in each of three drainage basins, differing considerably in environmental heterogeneity. Each drainage basin was further divided into two equally sized sets of sites, again differing profoundly in environmental heterogeneity. Benthic invertebrate data were divided into three groups of taxa based on overland dispersal modes: passive dispersers with aquatic adults, passive dispersers with terrestrial winged adults, and active dispersers with terrestrial winged adults. The co-occurrence of taxa in each dispersal mode group, drainage basin, and heterogeneity site subset was measured using the C-score and its standardized effect size. The probability of finding high levels of species segregation tended to increase with environmental heterogeneity across the drainage basins. These patterns were, however, contingent on both dispersal mode and drainage basin. It thus appears that environmental heterogeneity and dispersal mode interact in affecting co-occurrence in metacommunities, with passive dispersers with aquatic adults showing random patterns irrespective of environmental heterogeneity, and active dispersers with terrestrial winged adults showing increasing segregation with increasing environmental heterogeneity. PMID:23467653
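
    The checkerboard (C-) score used here has a simple closed form: for each pair of taxa with site totals R_i, R_j and S shared sites, the number of checkerboard units is (R_i - S)(R_j - S), averaged over all pairs. A minimal sketch (the standardized effect size would additionally require comparison against null-model randomizations):

      import numpy as np
      from itertools import combinations

      def c_score(pa):
          """pa: 2-D 0/1 array, rows = sites, columns = taxa."""
          totals = pa.sum(axis=0)
          units = []
          for i, j in combinations(range(pa.shape[1]), 2):
              shared = np.sum((pa[:, i] == 1) & (pa[:, j] == 1))
              units.append((totals[i] - shared) * (totals[j] - shared))
          return float(np.mean(units))

      pa = np.random.default_rng(5).integers(0, 2, size=(25, 12))
      print(c_score(pa))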

  11. Spatial patch occupancy patterns of the Lower Keys marsh rabbit

    USGS Publications Warehouse

    Eaton, Mitchell J.; Hughes, Phillip T.; Nichols, James D.; Morkill, Anne; Anderson, Chad

    2011-01-01

    Reliable estimates of presence or absence of a species can provide substantial information on management questions related to distribution and habitat use but should incorporate the probability of detection to reduce bias. We surveyed for the endangered Lower Keys marsh rabbit (Sylvilagus palustris hefneri) in habitat patches on 5 Florida Key islands, USA, to estimate occupancy and detection probabilities. We derived detection probabilities using spatial replication of plots and evaluated hypotheses that patch location (coastal or interior) and patch size influence occupancy and detection. Results demonstrate that detection probability, given rabbits were present, was <0.5 and suggest that naïve estimates (i.e., estimates without consideration of imperfect detection) of patch occupancy are negatively biased. We found that patch size and location influenced probability of occupancy but not detection. Our findings will be used by Refuge managers to evaluate population trends of Lower Keys marsh rabbits from historical data and to guide management decisions for species recovery. The sampling and analytical methods we used may be useful for researchers and managers of other endangered lagomorphs and cryptic or fossorial animals occupying diverse habitats.
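
    The negative bias of naive occupancy estimates is easy to demonstrate by simulation; the occupancy, detection, and replication values below are hypothetical, not the study's estimates.

      import numpy as np

      rng = np.random.default_rng(11)
      psi, p, n_patches, n_plots = 0.6, 0.4, 500, 3

      occupied = rng.random(n_patches) < psi
      detected = (rng.random((n_patches, n_plots)) < p) & occupied[:, None]
      naive = detected.any(axis=1).mean()
      print(f"true occupancy {psi:.2f} vs naive estimate {naive:.2f}")
      # With p = 0.4 and 3 replicate plots, P(detect | occupied) = 1 - 0.6**3
      # = 0.784, so the naive estimate converges to 0.6 * 0.784 ~ 0.47.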

  12. Distribution of human waste samples in relation to sizing waste processing in space

    NASA Technical Reports Server (NTRS)

    Parker, Dick; Gallagher, S. K.

    1992-01-01

    Human waste processing for closed ecological life support systems (CELSS) in space requires that there be an accurate knowledge of the quantity of wastes produced. Because initial CELSS will be handling relatively few individuals, it is important to know the variation that exists in the production of wastes rather than relying upon mean values that could result in undersizing equipment for a specific crew. On the other hand, because of the costs of orbiting equipment, it is important to design the equipment with a minimum of excess capacity because of the weight that extra capacity represents. A considerable quantity of information that had been independently gathered on waste production was examined in order to obtain estimates of equipment sizing requirements for handling waste loads from crews of 2 to 20 individuals. The recommended design for a crew of 8 should hold 34.5 liters per day (4315 ml/person/day) for urine and stool water and a little more than 1.25 kg per day (154 g/person/day) of human waste solids and sanitary supplies.
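
    The sizing arithmetic scales linearly with crew size from the per-person design rates quoted above:

      ML_WATER_PER_PERSON = 4315   # urine and stool water, ml/person/day
      G_SOLIDS_PER_PERSON = 154    # waste solids and supplies, g/person/day

      for crew in (2, 4, 8, 20):
          litres = crew * ML_WATER_PER_PERSON / 1000
          kg = crew * G_SOLIDS_PER_PERSON / 1000
          print(f"crew {crew:2d}: {litres:5.1f} L/day, {kg:5.2f} kg/day solids")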

  13. On the Solidification and Structure Formation during Casting of Large Inserts in Ferritic Nodular Cast Iron

    NASA Astrophysics Data System (ADS)

    Tadesse, Abel; Fredriksson, Hasse

    2018-06-01

    The graphite nodule count and size distributions for boiling water reactor (BWR) and pressurized water reactor (PWR) inserts were investigated by taking samples at heights of 2160 and 1150 mm, respectively. In each cross section, two locations were taken into consideration for both the microstructural and solidification modeling. The numerical solidification modeling was performed in a two-dimensional model by considering the nucleation and growth in eutectic ductile cast iron. The microstructural results reveal that the nodule size and count distribution along the cross sections are different in each location for both inserts. Finer graphite nodules appear in the thinner sections and close to the mold walls. The coarser nodules are distributed mostly in the last solidified location. The simulation result indicates that the finer nodules are related to a higher cooling rate and a lower degree of microsegregation, whereas the coarser nodules are related to a lower cooling rate and a higher degree of microsegregation. The solidification time interval and the last solidifying locations in the BWR and PWR are also different.

  14. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    NASA Astrophysics Data System (ADS)

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure.
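
    A generic Brovey-style pan-sharpening step conveys the idea (an illustration, not the authors' algorithm): upsample the chemically specific SIMS map, then modulate it by the ratio of the SEM image to its own low-pass version, injecting high-frequency spatial detail without altering chemical contrast.

      import numpy as np
      from scipy.ndimage import uniform_filter, zoom

      def pan_sharpen(sims_lowres, sem_highres):
          scale = sem_highres.shape[0] // sims_lowres.shape[0]
          sims_up = zoom(sims_lowres, scale, order=1)          # upsampled SIMS
          sem_smooth = uniform_filter(sem_highres, size=scale)
          detail = sem_highres / np.maximum(sem_smooth, 1e-9)  # high-freq ratio
          return sims_up * detail

      sims = np.random.default_rng(2).random((64, 64))    # stand-in SIMS map
      sem = np.random.default_rng(3).random((256, 256))   # stand-in SEM image
      print(pan_sharpen(sims, sem).shape)                 # -> (256, 256)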

  15. In Situ Aerosol Detector

    NASA Technical Reports Server (NTRS)

    Vakhtin, Andrei; Krasnoperov, Lev

    2011-01-01

    An affordable technology designed to facilitate extensive global atmospheric aerosol measurements has been developed. This lightweight instrument is compatible with newly developed platforms such as tethered balloons, blimps, kites, and even disposable instruments such as dropsondes. The technology is based on detection of light scattered by aerosol particles, with an optical layout that enhances the performance of the laboratory prototype instrument, allowing detection of smaller aerosol particles and improving the accuracy of aerosol particle size measurement. It has been determined that using a focused illumination geometry without any apertures is advantageous over the originally proposed collimated beam/slit geometry (which is supposed to produce uniform illumination over the beam cross-section). First, the illumination source is used more efficiently, which allows detection of smaller aerosol particles. Second, the integral scattered light intensity measured for a particle can be corrected for beam intensity profile inhomogeneity, based on the measured beam intensity profile and the measured particle location. The particle location (coordinates) in the illuminated sample volume is determined from the information contained in the image frame. This procedure considerably improves the accuracy of aerosol particle sizing.

  16. Submicron polycaprolactone particles as a carrier for imaging contrast agent for in vitro applications.

    PubMed

    Iqbal, Muhammad; Robin, Sophie; Humbert, Philippe; Viennet, Céline; Agusti, Geraldine; Fessi, Hatem; Elaissari, Abdelhamid

    2015-12-01

    Fluorescent materials have recently attracted considerable attention due to their unique properties and high performance as imaging agents in biomedical fields. Different imaging agents have been encapsulated in order to restrict their delivery to a specific area. In this study, a fluorescent contrast agent was encapsulated in polycaprolactone (PCL) polymer for in vitro applications. The encapsulation was performed using a modified double emulsion solvent evaporation technique with sonication, with fluorescent nanoparticles (20 nm) incorporated in the inner aqueous phase of the double emulsion. A number of samples were fabricated using different concentrations of fluorescent contrast agent. The contrast agent-containing submicron particles were characterized by a zetasizer for average particle size, SEM and TEM for morphology, and fluorescence spectrophotometry for encapsulation efficiency. Moreover, the distribution of contrast agent in the PCL matrix was determined by confocal microscopy. The incorporation of contrast agent at different concentrations did not affect the physicochemical properties of the PCL particles, and the average size of the encapsulated particles was in the submicron range. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Prevalence and risk factors for Maedi-Visna in sheep farms in Mecklenburg-Western-Pomerania.

    PubMed

    Hüttner, Klim; Seelmann, Matthias; Feldhusen, Frerk

    2010-01-01

    Despite indications of a considerable spread of Maedi-Visna among sheep flocks in Germany, prevalence studies of this important infection are hardly available, and prior to any health schemes and guidelines, knowledge about regional disease distribution is essential. Stratified by herd size, 70 farms were randomly selected, of which 41 cooperated. A total of 2229 blood samples were taken at random and serologically examined. For assessment of selected farm characteristics, a questionnaire exercise was conducted at all farms involved. The average herd prevalence was 51.2%; the within-herd prevalence was 28.8%. In the univariate analysis of risk factors, small (10-100 sheep) and large (>250 sheep) farms were more MVV-affected than medium-sized farms. The average stable and pasture space per sheep was larger on non-infected than on infected farms. Owners' judgement of general herd health was better on non-infected than on infected farms. Among infected farms only, the risk of a within-herd prevalence above 20% was significantly higher in crossbred than in purebred flocks.

  18. Finite element analysis of the upsetting of a 5056 aluminum alloy sample with consideration of its microstructure

    NASA Astrophysics Data System (ADS)

    Voronin, S. V.; Chaplygin, K. K.

    2017-12-01

    Computer simulation of the upsetting of finite element models (FEMs) of a 5056 aluminum alloy sample is carried out, both for an isotropic model and for a model that takes the microstructure into consideration. The stress and strain distribution patterns at different process stages are obtained, and the strain required for deformation of the FEM samples is determined. The influence of the material microstructure on the stress-strain behavior and on technological parameters is demonstrated.

  19. Characterization of infectious aerosols in health care facilities: an aid to effective engineering controls and preventive strategies.

    PubMed

    Cole, E C; Cook, C E

    1998-08-01

    Assessment of strategies for engineering controls for the prevention of airborne infectious disease transmission to patients and to health care and related workers requires consideration of the factors relevant to aerosol characterization. These factors include aerosol generation, particle size and concentration, organism viability, infectivity and virulence, airflow and climate, and environmental sampling and analysis. The major focus of attention on engineering controls comes from recent increases in tuberculosis, particularly the multidrug-resistant varieties, in the general hospital population, the severely immunocompromised, and those in at-risk and confined environments such as prisons, long-term care facilities, and shelters for the homeless. Many workers are in close contact with persons who have active, undiagnosed, or insufficiently treated tuberculosis. Additionally, patients and health care workers may be exposed to a variety of pathogenic human viruses, opportunistic fungi, and bacteria. This report therefore focuses on the nature of infectious aerosol transmission in an attempt to determine which factors can be systematically addressed to yield proven, applied engineering approaches to the control of infectious aerosols in hospital and health care facility environments. The infectious aerosols of consideration are those that are generated as particles of respirable size by both human and environmental sources and that are capable of remaining viable and airborne for extended periods in the indoor environment. This definition precludes skin and mucous membrane exposures from splashes (rather than true aerosols) of blood or body fluids containing infectious disease agents. There are no epidemiologic or laboratory studies documenting the transmission of bloodborne virus by way of aerosols.

  20. Measuring neuroplasticity associated with cerebral palsy rehabilitation: An MRI based power analysis.

    PubMed

    Reid, Lee B; Pagnozzi, Alex M; Fiori, Simona; Boyd, Roslyn N; Dowson, Nicholas; Rose, Stephen E

    2017-05-01

    Researchers in the field of child neurology are increasingly looking to supplement clinical trials of motor rehabilitation with neuroimaging in order to better understand the relationship between behavioural training, brain changes, and clinical improvements. Randomised controlled trials are typically accompanied by sample size calculations to detect clinical improvements but, despite the large cost of neuroimaging, not by equivalent calculations for concurrently acquired neuroimaging measures of change in response to intervention. To aid in this regard, a power analysis was conducted for two measures of brain change that may be indexed in a trial of rehabilitative therapy for cerebral palsy: cortical thickness of the impaired primary sensorimotor cortex, and fractional anisotropy of the impaired, delineated corticospinal tract. Power for measuring fractional anisotropy was assessed for both region-of-interest-seeded and fMRI-seeded diffusion tractography. Taking into account practical limitations, as well as data loss due to behavioural and image-processing issues, the estimated required participant numbers were 101, 128 and 59 for cortical thickness, region-of-interest-based tractography, and fMRI-seeded tractography, respectively. These numbers are not adjusted for study attrition. Although such participant numbers may be out of reach of many trials, several options are available to improve statistical power, including careful preparation of participants for scanning using mock simulators, careful consideration of image-processing options, and enrolment of as homogeneous a cohort as possible. This work suggests that smaller and moderate-sized studies should give genuine consideration to harmonising scanning protocols between groups to allow the pooling of data. Copyright © 2017 ISDN. All rights reserved.
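
    Since the quoted participant numbers exclude attrition, a final inflation step of the usual form n / (1 - attrition rate) is needed in practice; the 20% rate below is a hypothetical planning value, not one from the paper.

      import math

      def adjust_for_attrition(n_power, attrition_rate):
          return math.ceil(n_power / (1 - attrition_rate))

      for n in (101, 128, 59):   # numbers from the power analysis above
          print(f"{n} -> {adjust_for_attrition(n, attrition_rate=0.20)}")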
