Sample records for expected sample sizes

  1. Minimizing the Maximum Expected Sample Size in Two-Stage Phase II Clinical Trials with Continuous Outcomes

    PubMed Central

    Wason, James M. S.; Mander, Adrian P.

    2012-01-01

    Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118
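
    To make the notion of expected sample size concrete: for a two-stage design with stage sizes n1 and n2, E[N] depends on the probability of stopping after stage one, which in turn depends on the true treatment effect. A minimal sketch for a hypothetical continuous-outcome design with an illustrative futility boundary c1 (not the authors' optimised design):

```python
import numpy as np
from scipy.stats import norm

def expected_sample_size(delta, n1=20, n2=20, c1=0.0, sigma=1.0):
    """E[N] for a two-stage design with a continuous outcome that stops for
    futility after stage 1 if the stage-1 z-statistic falls below c1."""
    z1_mean = delta * np.sqrt(n1) / sigma   # mean of the stage-1 z-statistic
    p_continue = norm.sf(c1 - z1_mean)      # P(continue to stage 2 | delta)
    return n1 + p_continue * n2

# E[N] varies with the true effect, which is why a design that is optimal at
# one assumed effect can perform poorly at another.
for d in (0.0, 0.25, 0.5):
    print(d, round(expected_sample_size(d), 1))
```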

  2. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
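
    The calculation being replicated here is, for a continuous outcome in a two-arm parallel-group superiority trial, the standard normal-approximation formula. A minimal sketch with illustrative numbers (not values from the reviewed RCTs), showing how an optimistic assumed effect translates into an underpowered study:

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided comparison of two means
    (normal approximation), as in standard sample size software."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

print(n_per_arm(delta=10, sd=20))  # effect size assumed at the design stage
print(n_per_arm(delta=6, sd=20))   # smaller effect actually observed
```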

  3. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.

  4. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal where current evidence is sufficient assuming no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  5. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.

  6. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
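
    The square-root scaling can be reproduced with a much cruder decision model than the exponential-family setup above. In the toy sketch below (not the authors' derivation; prior, variance and horizon are all illustrative), a single-arm trial of size n informs an adopt/reject decision for the remaining N − n patients, and the optimal n found by grid search grows roughly as the square root of N:

```python
import numpy as np

rng = np.random.default_rng(1)
SIMS = 200_000
TAU, SIGMA = 0.2, 1.0
mu = rng.normal(0.0, TAU, SIMS)        # true effects drawn from the prior
z = rng.standard_normal(SIMS)          # shared noise (common random numbers)

def expected_gain(n, N):
    """After a trial of size n, adopt the treatment for the remaining
    N - n patients iff the posterior mean of the effect is positive
    (equivalently, iff the observed trial mean is positive)."""
    xbar = mu + z * SIGMA / np.sqrt(n)
    return np.mean(mu * (xbar > 0)) * (N - n)

for N in (1_000, 10_000, 100_000):
    ns = np.unique(np.geomspace(5, N // 2, 50).astype(int))
    n_opt = ns[int(np.argmax([expected_gain(n, N) for n in ns]))]
    print(N, n_opt, round(n_opt / np.sqrt(N), 2))   # ratio is roughly stable
```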

  7. Approximate Sample Size Formulas for Testing Group Mean Differences when Variances Are Unequal in One-Way ANOVA

    ERIC Educational Resources Information Center

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2008-01-01

    This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…

  8. Valuing Trial Designs from a Pharmaceutical Perspective Using Value-Based Pricing.

    PubMed

    Breeze, Penny; Brennan, Alan

    2015-11-01

    Our aim was to adapt the traditional framework for expected net benefit of sampling (ENBS) to be more compatible with drug development trials from the pharmaceutical perspective. We modify the traditional framework for conducting ENBS and assume that the price of the drug is conditional on the trial outcomes. We use a value-based pricing (VBP) criterion to determine price conditional on trial data using Bayesian updating of cost-effectiveness (CE) model parameters. We assume that there is a threshold price below which the company would not market the new intervention. We present a case study in which a phase III trial sample size and trial duration are varied. For each trial design, we sampled 10,000 trial outcomes and estimated VBP using a CE model. The expected commercial net benefit is calculated as the expected profits minus the trial costs. A clinical trial with shorter follow-up, and larger sample size, generated the greatest expected commercial net benefit. Increasing the duration of follow-up had a modest impact on profit forecasts. Expected net benefit of sampling can be adapted to value clinical trials in the pharmaceutical industry to optimise the expected commercial net benefit. However, the analyses can be very time consuming for complex CE models. © 2014 The Authors. Health Economics published by John Wiley & Sons Ltd.

  9. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768

  10. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    PubMed Central

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  11. Influence of BMI and dietary restraint on self-selected portions of prepared meals in US women.

    PubMed

    Labbe, David; Rytz, Andréas; Brunstrom, Jeffrey M; Forde, Ciarán G; Martin, Nathalie

    2017-04-01

    The rise of obesity prevalence has been attributed in part to an increase in food and beverage portion sizes selected and consumed among overweight and obese consumers. Nevertheless, evidence from observations of adults is mixed and contradictory findings might reflect the use of small or unrepresentative samples. The objective of this study was i) to determine the extent to which BMI and dietary restraint predict self-selected portion sizes for a range of commercially available prepared savoury meals and ii) to consider the importance of these variables relative to two previously established predictors of portion selection, expected satiation and expected liking. A representative sample of female consumers (N = 300, range 18-55 years) evaluated 15 frozen savoury prepared meals. For each meal, participants rated their expected satiation and expected liking, and selected their ideal portion using a previously validated computer-based task. Dietary restraint was quantified using the Dutch Eating Behaviour Questionnaire (DEBQ-R). Hierarchical multiple regression was performed on self-selected portions with age, hunger level, and meal familiarity entered as control variables in the first step of the model, expected satiation and expected liking as predictor variables in the second step, and DEBQ-R and BMI as exploratory predictor variables in the third step. The second and third steps significantly explained variance in portion size selection (18% and 4%, respectively). Larger portion selections were significantly associated with lower dietary restraint and with lower expected satiation. There was a positive relationship between BMI and portion size selection (p = 0.06) and between expected liking and portion size selection (p = 0.06). Our discussion considers future research directions, the limited variance explained by our model, and the potential for portion size underreporting by overweight participants. Copyright © 2016 Nestec S.A. Published by Elsevier Ltd.. All rights reserved.

  12. Coalescence computations for large samples drawn from populations of time-varying sizes

    PubMed Central

    Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek

    2017-01-01

    We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for coalescent with large sample size. The obtained results are based on computational methodologies, which involve combining coalescence time scale changes with techniques of integral transformations and using analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluation of accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for analysis of large human mitochondrial DNA dataset. PMID:28170404

  13. Damage Accumulation in Silica Glass Nanofibers.

    PubMed

    Bonfanti, Silvia; Ferrero, Ezequiel E; Sellerio, Alessandro L; Guerra, Roberto; Zapperi, Stefano

    2018-06-06

    The origin of the brittle-to-ductile transition, experimentally observed in amorphous silica nanofibers as the sample size is reduced, is still debated. Here we investigate the issue by extensive molecular dynamics simulations at low and room temperatures for a broad range of sample sizes, with open and periodic boundary conditions. Our results show that small sample-size enhanced ductility is primarily due to diffuse damage accumulation, that for larger samples leads to brittle catastrophic failure. Surface effects such as boundary fluidization contribute to ductility at room temperature by promoting necking, but are not the main driver of the transition. Our results suggest that the experimentally observed size-induced ductility of silica nanofibers is a manifestation of finite-size criticality, as expected in general for quasi-brittle disordered networks.

  14. Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.

    PubMed

    Jung, Sin-Ho

    2017-07-01

    In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.

  15. Are There Scenarios When the Use of Non-Placebo-Control Groups in Experimental Trial Designs Increase Expected Value to Society?

    PubMed

    Uyei, Jennifer; Braithwaite, R Scott

    2016-01-01

    Despite the benefits of the placebo-controlled trial design, it is limited by its inability to quantify total benefits and harms. Such trials, for example, are not designed to detect an intervention's placebo or nocebo effects, which if detected could alter the benefit-to-harm balance and change a decision to adopt or reject an intervention. In this article, we explore scenarios in which alternative experimental trial designs, which differ in the type of control used, influence expected value across a range of pretest assumptions and study sample sizes. We developed a decision model to compare 3 trial designs and their implications for decision making: 2-arm placebo-controlled trial ("placebo-control"), 2-arm intervention v. do nothing trial ("null-control"), and an innovative 3-arm trial design: intervention v. do nothing v. placebo trial ("novel design"). Four scenarios were explored regarding particular attributes of a hypothetical intervention: 1) all benefits and no harm, 2) no biological effect, 3) only biological effects, and 4) surreptitious harm (no biological benefit or nocebo effect). Scenario 1: When sample sizes were very small, the null-control was preferred, but as sample sizes increased, expected value of all 3 designs converged. Scenario 2: The null-control was preferred regardless of sample size when the ratio of placebo to nocebo effect was >1; otherwise, the placebo-control was preferred. Scenario 3: When sample size was very small, the placebo-control was preferred when benefits outweighed harms, but the novel design was preferred when harms outweighed benefits. Scenario 4: The placebo-control was preferred when harms outweighed placebo benefits; otherwise, preference went to the null-control. Scenarios are hypothetical, study designs have not been tested in a real-world setting, blinding is not possible in all designs, and some may argue the novel design poses ethical concerns. We identified scenarios in which alternative experimental study designs would confer greater expected value than the placebo-controlled trial design. The likelihood and prevalence of such situations warrant further study. © The Author(s) 2015.

  16. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    PubMed

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments are multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. Additional to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Clinical decision making and the expected value of information.

    PubMed

    Willan, Andrew R

    2007-01-01

The results of the HOPE study, a randomized clinical trial, provide strong evidence that 1) ramipril prevents the composite outcome of cardiovascular death, myocardial infarction or stroke in patients who are at high risk of a cardiovascular event and 2) ramipril is cost-effective at a threshold willingness-to-pay of $10,000 to prevent an event of the composite outcome. In this report the concept of the expected value of information is used to determine if the information provided by the HOPE study is sufficient for decision making in the US and Canada. Using the cost-effectiveness data from a clinical trial, or from a meta-analysis of several trials, one can determine, based on the number of future patients that would benefit from the health technology under investigation, the expected value of sample information (EVSI) of a future trial as a function of proposed sample size. If the EVSI exceeds the cost for any particular sample size then the current information is insufficient for decision making and a future trial is indicated. If, on the other hand, there is no sample size for which the EVSI exceeds the cost, then there is sufficient information for decision making and no future trial is required. Using the data from the HOPE study these concepts are applied for various assumptions regarding the fixed and variable cost of a future trial and the number of patients who would benefit from ramipril. Expected value of information methods provide a decision-analytic alternative to the standard likelihood methods for assessing the evidence provided by cost-effectiveness data from randomized clinical trials.

  18. The Effect of Pixel Size on the Accuracy of Orthophoto Production

    NASA Astrophysics Data System (ADS)

    Kulur, S.; Yildiz, F.; Selcuk, O.; Yildiz, M. A.

    2016-06-01

In our country, orthophoto products are used by the public and private sectors for engineering services and infrastructure projects. Orthophotos are particularly preferred because their production is faster and more economical than vector-based digital photogrammetric production. Today, digital orthophotos provide the accuracy expected for engineering and infrastructure projects. In this study, the accuracy of orthophotos produced with pixel sizes at different sampling intervals is tested against the expectations of engineering and infrastructure projects.

  19. Expected values and variances of Bragg peak intensities measured in a nanocrystalline powder diffraction experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Öztürk, Hande; Noyan, I. Cevdet

A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears as a special case, limited to large crystallite sizes, here. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.

  20. Expected values and variances of Bragg peak intensities measured in a nanocrystalline powder diffraction experiment

    DOE PAGES

    Öztürk, Hande; Noyan, I. Cevdet

    2017-08-24

A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears as a special case, limited to large crystallite sizes, here. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.

  1. Efficient computation of the joint sample frequency spectra for multiple populations.

    PubMed

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
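
    As a reference point for what such methods compute in the multi-population case: for a single panmictic population of constant size, the expected SFS has the classical closed form E[ξ_i] = θ/i for i = 1, …, n−1. A minimal sketch of that baseline (parameters illustrative; this is not the momi algorithm):

```python
import numpy as np

def expected_sfs(n, theta):
    """Expected number of segregating sites with derived-allele count i
    (i = 1, ..., n-1) for n sequences from a constant-size population."""
    i = np.arange(1, n)
    return theta / i

n, theta = 20, 10.0
sfs = expected_sfs(n, theta)
print(sfs[:5])    # singletons are the most numerous class: theta, theta/2, ...
print(sfs.sum())  # Watterson's expected number of segregating sites, theta * a_n
```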

  2. Efficient computation of the joint sample frequency spectra for multiple populations

    PubMed Central

    Kamm, John A.; Terhorst, Jonathan; Song, Yun S.

    2016-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248

  3. How Sample Size Affects a Sampling Distribution

    ERIC Educational Resources Information Center

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
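
    The two facts this article emphasises, that the sampling distribution of the mean is centred on the population mean with standard error σ/√n and becomes approximately normal as n grows, are easy to demonstrate by simulation. A minimal sketch with an arbitrary skewed population (numbers purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=1_000_000)  # skewed population

for n in (5, 30, 200):
    # 10,000 samples of size n; compare the empirical SE with sigma/sqrt(n)
    means = rng.choice(population, size=(10_000, n)).mean(axis=1)
    print(n, round(means.mean(), 3), round(means.std(), 3),
          round(population.std() / np.sqrt(n), 3))
```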

  4. Sample size requirements for the design of reliability studies: precision consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.

  5. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control ensuing statistical inference over parasite rates and not over these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation unravelled the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes are increased in relation to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental to design future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out on varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise to be applied to the planning of alternative sampling schemes that may target or oversample specific age groups.

  6. Bayes factor design analysis: Planning for compelling evidence.

    PubMed

    Schönbrodt, Felix D; Wagenmakers, Eric-Jan

    2018-02-01

A sizeable literature exists on the use of frequentist power analysis in the null-hypothesis significance testing (NHST) paradigm to facilitate the design of informative experiments. In contrast, there is almost no literature that discusses the design of experiments when Bayes factors (BFs) are used as a measure of evidence. Here we explore Bayes Factor Design Analysis (BFDA) as a useful tool to design studies for maximum efficiency and informativeness. We elaborate on three possible BF designs, (a) a fixed-n design, (b) an open-ended Sequential Bayes Factor (SBF) design, where researchers can test after each participant and can stop data collection whenever there is strong evidence for either H1 or H0, and (c) a modified SBF design that defines a maximal sample size where data collection is stopped regardless of the current state of evidence. We demonstrate how the properties of each design (i.e., expected strength of evidence, expected sample size, expected probability of misleading evidence, expected probability of weak evidence) can be evaluated using Monte Carlo simulations and equip researchers with the necessary information to compute their own Bayesian design analyses.
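
    The Monte Carlo evaluation of an open-ended SBF design can be sketched in a few lines. For transparency, the sketch below uses a simple point-null versus point-alternative likelihood-ratio Bayes factor for a normal mean with known variance rather than the default Bayes factor used in the paper; the threshold, effect sizes and maximum n are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def sbf_trial(true_delta, alt_delta=0.5, bound=10.0, n_max=500):
    """Open-ended SBF design: add one observation at a time and stop once
    BF10 > bound (evidence for H1) or BF10 < 1/bound (evidence for H0).
    For a point null (delta = 0) vs point alternative (delta = alt_delta)
    with unit variance, log BF10 = alt_delta*sum(x) - n*alt_delta**2/2."""
    s, n = 0.0, 0
    while n < n_max:
        s += rng.normal(true_delta, 1.0)
        n += 1
        log_bf10 = alt_delta * s - n * alt_delta ** 2 / 2
        if abs(log_bf10) > np.log(bound):
            return n, log_bf10 > 0
    return n_max, None                      # still undecided at n_max

runs = [sbf_trial(true_delta=0.5) for _ in range(2000)]
print("expected sample size:", np.mean([n for n, _ in runs]))
print("P(misleading evidence for H0):", np.mean([stop is False for _, stop in runs]))
```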

  7. The size distribution of inhabited planets

    NASA Astrophysics Data System (ADS)

    Simpson, Fergus

    2016-02-01

    Earth-like planets are expected to provide the greatest opportunity for the detection of life beyond the Solar system. However, our planet cannot be considered a fair sample, especially if intelligent life exists elsewhere. Just as a person's country of origin is a biased sample among countries, so too their planet of origin may be a biased sample among planets. The magnitude of this effect can be substantial: over 98 per cent of the world's population live in a country larger than the median. In the context of a simple model where the mean population density is invariant to planet size, we infer that a given inhabited planet (such as our nearest neighbour) has a radius r < 1.2r⊕ (95 per cent confidence bound). We show that this result is likely to hold not only for planets hosting advanced life, but also for those which harbour primitive life forms. Further, inferences may be drawn for any variable which influences population size. For example, since population density is widely observed to decline with increasing body mass, we conclude that most intelligent species are expected to exceed 300 kg.

  8. Variable-Size Bead Layer as Standard Reference for Endothelial Microscopes.

    PubMed

    Tufo, Simona; Prazzoli, Erica; Ferraro, Lorenzo; Cozza, Federica; Borghesi, Alessandro; Tavazzi, Silvia

    2017-02-01

For morphometric analysis of the cell mosaic of corneal endothelium, checking accuracy and precision of instrumentation is a key step. In this study, a standard reference sample is proposed, developed to reproduce the cornea with its shape and the endothelium with its intrinsic variability in the cell size. A polystyrene bead layer (representing the endothelium) was deposited on a lens (representing the cornea). Bead diameters were 20, 25, and 30 μm (fractions in number 55%, 30%, and 15%, respectively). Bead density and hexagonality were simulated to obtain the expected true values and measured using a slit-lamp endothelial microscope applied to 1) a Takagi 700GL slit lamp at 40× magnification (recommended standard setup) and 2) a Takagi 2ZL slit lamp at 25× magnification. The simulation provided the expected bead density 2001 mm⁻² and hexagonality 47%. At 40×, density and hexagonality were measured to be 2009 mm⁻² (SD 93 mm⁻²) and 45% (SD 3%). At 25× on a different slit lamp, the comparison between measured and expected densities provided the factor 1.526 to resize the image and to use the current algorithms of the slit-lamp endothelial microscope for cell recognition. A variable-size polystyrene bead layer on a lens is proposed as a standard sample mimicking the real shape of the cornea and the variability of cell size and cell arrangement of corneal endothelium. The sample is suggested to evaluate accuracy and precision of cell density and hexagonality obtained by different endothelial microscopes, including a slit-lamp endothelial microscope applied to different slit lamps, also at different magnifications.

  9. A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes

    PubMed Central

    Bundy, Brian; Krischer, Jeffrey P.

    2016-01-01

    The area under the curve C-peptide following a 2-hour mixed meal tolerance test from 481 individuals enrolled on 5 prior TrialNet studies of recent onset type 1 diabetes from baseline to 12 months after enrollment were modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
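
    The roughly 50% reduction reported here is consistent with the usual relationship between covariate adjustment and sample size: if baseline covariates explain a fraction R² of the outcome variance, the ANCOVA sample size is approximately the unadjusted sample size multiplied by (1 − R²). A minimal sketch with illustrative numbers (not the TrialNet estimates):

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.90, r2=0.0):
    """Two-arm sample size for a continuous endpoint; r2 is the fraction of
    outcome variance explained by baseline covariates in the ANCOVA."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2 * (1 - r2))

print(n_per_arm(delta=0.2, sd=0.6))           # unadjusted analysis
print(n_per_arm(delta=0.2, sd=0.6, r2=0.5))   # covariates explain half the variance
```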

  10. Implementing Generalized Additive Models to Estimate the Expected Value of Sample Information in a Microsimulation Model: Results of Three Case Studies.

    PubMed

    Rabideau, Dustin J; Pei, Pamela P; Walensky, Rochelle P; Zheng, Amy; Parker, Robert A

    2018-02-01

The expected value of sample information (EVSI) can help prioritize research but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAM) to results generated from a probabilistic sensitivity analysis (PSA). For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM (a flexible regression model that estimates the functional form as part of the model fitting process) to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. For all 3 case studies, the GAM approach consistently gave similar estimates of EVPPI compared with the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
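
    The regression idea behind this family of methods is compact: regress the incremental net monetary benefit from the PSA draws on the parameter(s) of interest, then average the positive part of the fitted values. The sketch below uses a low-order polynomial fit as a stand-in for the GAM and a made-up single-parameter two-strategy model, purely to show the mechanics for EVPPI (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n_psa = 10_000

# Toy PSA output: incremental net monetary benefit (INB) depends on one
# uncertain parameter theta plus other, unmodelled sources of noise.
theta = rng.normal(0.0, 1.0, n_psa)
inb = 500 * theta - 100 + rng.normal(0.0, 800, n_psa)

# Stand-in for the GAM: a polynomial estimate of E[INB | theta].
coef = np.polyfit(theta, inb, deg=3)
fitted = np.polyval(coef, theta)

# Two-strategy EVPPI: E[max(0, E[INB|theta])] - max(0, E[INB]).
evppi = np.mean(np.maximum(fitted, 0)) - max(np.mean(inb), 0)
print(round(evppi, 1))
```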

  11. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective is to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
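
    The core estimate and its precision can be written down directly: N̂ = M / P̂, and by the delta method var(N̂) ≈ M²·var(P̂)/P⁴, with var(P̂) inflated by the survey design effect. A minimal sketch of the estimate, its interval, and a sample size for a target relative precision (all inputs illustrative, not the Harare data):

```python
from math import ceil, sqrt

def population_size_ci(M, p_hat, n, deff=2.0, z=1.96):
    """Multiplier-method estimate N = M / p with a delta-method CI,
    inflating var(p) by the survey design effect."""
    var_p = deff * p_hat * (1 - p_hat) / n
    N_hat = M / p_hat
    se_N = M * sqrt(var_p) / p_hat ** 2
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

def n_for_relative_precision(p_hat, rel_err=0.2, deff=2.0, z=1.96):
    """Survey size so the CI half-width is about rel_err * N:
    z*sqrt(deff*p*(1-p)/n)/p <= rel_err."""
    return ceil(deff * z ** 2 * (1 - p_hat) / (p_hat * rel_err ** 2))

print(population_size_ci(M=5000, p_hat=0.25, n=400))
print(n_for_relative_precision(p_hat=0.25))
```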

  12. A model-based approach to sample size estimation in recent onset type 1 diabetes.

    PubMed

    Bundy, Brian N; Krischer, Jeffrey P

    2016-11-01

The area under the curve C-peptide following a 2-h mixed meal tolerance test from 498 individuals enrolled on five prior TrialNet studies of recent onset type 1 diabetes from baseline to 12 months after enrolment were modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Shrinking of silicon nanocrystals embedded in an amorphous silicon oxide matrix during rapid thermal annealing in a forming gas atmosphere

    NASA Astrophysics Data System (ADS)

    van Sebille, M.; Fusi, A.; Xie, L.; Ali, H.; van Swaaij, R. A. C. M. M.; Leifer, K.; Zeman, M.

    2016-09-01

    We report the effect of hydrogen on the crystallization process of silicon nanocrystals embedded in a silicon oxide matrix. We show that hydrogen gas during annealing leads to a lower sub-band gap absorption, indicating passivation of defects created during annealing. Samples annealed in pure nitrogen show expected trends according to crystallization theory. Samples annealed in forming gas, however, deviate from this trend. Their crystallinity decreases for increased annealing time. Furthermore, we observe a decrease in the mean nanocrystal size and the size distribution broadens, indicating that hydrogen causes a size reduction of the silicon nanocrystals.

  14. The challenge of accurately documenting bee species richness in agroecosystems: bee diversity in eastern apple orchards

    PubMed Central

    Russo, Laura; Park, Mia; Gibbs, Jason; Danforth, Bryan

    2015-01-01

    Bees are important pollinators of agricultural crops, and bee diversity has been shown to be closely associated with pollination, a valuable ecosystem service. Higher functional diversity and species richness of bees have been shown to lead to higher crop yield. Bees simultaneously represent a mega-diverse taxon that is extremely challenging to sample thoroughly and an important group to understand because of pollination services. We sampled bees visiting apple blossoms in 28 orchards over 6 years. We used species rarefaction analyses to test for the completeness of sampling and the relationship between species richness and sampling effort, orchard size, and percent agriculture in the surrounding landscape. We performed more than 190 h of sampling, collecting 11,219 specimens representing 104 species. Despite the sampling intensity, we captured <75% of expected species richness at more than half of the sites. For most of these, the variation in bee community composition between years was greater than among sites. Species richness was influenced by percent agriculture, orchard size, and sampling effort, but we found no factors explaining the difference between observed and expected species richness. Competition between honeybees and wild bees did not appear to be a factor, as we found no correlation between honeybee and wild bee abundance. Our study shows that the pollinator fauna of agroecosystems can be diverse and challenging to thoroughly sample. We demonstrate that there is high temporal variation in community composition and that sites vary widely in the sampling effort required to fully describe their diversity. In order to maximize pollination services provided by wild bee species, we must first accurately estimate species richness. For researchers interested in providing this estimate, we recommend multiyear studies and rarefaction analyses to quantify the gap between observed and expected species richness. PMID:26380684
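
    A common way to quantify the gap between observed and expected richness in surveys like this is an abundance-based estimator such as Chao1, which extrapolates total richness from the numbers of singleton and doubleton species. A minimal sketch on made-up count data (the estimator is generic, not the authors' specific rarefaction analysis):

```python
import numpy as np

def chao1(counts):
    """Chao1 estimate of total species richness from abundance counts."""
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)
    f1 = np.sum(counts == 1)           # singletons
    f2 = np.sum(counts == 2)           # doubletons
    if f2 == 0:                        # bias-corrected form when f2 = 0
        return s_obs + f1 * (f1 - 1) / 2
    return s_obs + f1 ** 2 / (2 * f2)

abund = [120, 45, 30, 8, 5, 3, 2, 2, 1, 1, 1, 1]   # hypothetical bee counts
print(sum(c > 0 for c in abund), chao1(abund))     # observed vs estimated richness
```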

  15. How large a training set is needed to develop a classifier for microarray data?

    PubMed

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.

  16. Elastic moduli in nano-size samples of amorphous solids: System size dependence

    NASA Astrophysics Data System (ADS)

    Cohen, Yossi; Procaccia, Itamar

    2012-08-01

This letter is motivated by some recent experiments on pancake-shaped nano-samples of metallic glass that indicate a decline in the measured shear modulus upon decreasing the sample radius. Similar measurements on crystalline samples of the same dimensions showed a much more modest change. In this letter we offer a theory of this phenomenon; we argue that such results are generically expected for any amorphous solid, with the main effect being related to the increased contribution of surfaces with respect to the bulk when the samples get smaller. We employ exact relations between the shear modulus and the eigenvalues of the system's Hessian matrix to explore the role of surface modes in affecting the elastic moduli.

  17. Sample Size Requirements and Study Duration for Testing Main Effects and Interactions in Completely Randomized Factorial Designs When Time to Event is the Outcome

    PubMed Central

    Moser, Barry Kurt; Halabi, Susan

    2013-01-01

    In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in the power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described. PMID:25530661
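
    For the simplest two-arm contrast within such a design, the expected number of events drives the power. A minimal sketch using the standard Schoenfeld events formula and an exponential survival model to translate events into patients (illustrative hazard ratio, accrual and follow-up; not the authors' matrix formulation for general factorial arrangements):

```python
from math import ceil, exp, log
from scipy.stats import norm

def events_required(hr, alpha=0.05, power=0.80):
    """Schoenfeld formula: events needed for a two-arm log-rank test, 1:1 allocation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(4 * z ** 2 / log(hr) ** 2)

def prob_event(lam, accrual, followup):
    """P(event observed) under exponential event times and uniform accrual."""
    return 1 - (exp(-lam * followup) - exp(-lam * (accrual + followup))) / (lam * accrual)

d = events_required(hr=0.7)
lam = log(2) / 12                        # control-arm median survival of 12 months
p = (prob_event(lam, 24, 12) + prob_event(0.7 * lam, 24, 12)) / 2
print(d, ceil(d / p))                    # events needed, total patients needed
```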

  18. Group-sequential three-arm noninferiority clinical trial designs

    PubMed Central

    Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko

    2016-01-01

We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study.

  19. Sample size calculation in economic evaluations.

    PubMed

    Al, M J; van Hout, B A; Michel, B C; Rutten, F F

    1998-06-01

A simulation method is presented for sample size calculation in economic evaluations. As input the method requires: the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and the power of the testing method, and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials. The first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction; in the second trial, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the conclusion of the study is that from a technical point of view it is possible to perform a sample size calculation for an economic evaluation, but one should wonder how useful it is.
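
    The simulation approach described can be reproduced in outline: draw the mean cost and effect differences with a given correlation, test whether the incremental net benefit at willingness-to-pay λ exceeds zero, and scan per-arm sample sizes until the target power is reached. A minimal sketch with illustrative inputs (not the angioplasty or lansoprazole data):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

def power_inb(n, d_eff=0.05, sd_eff=0.3, d_cost=200.0, sd_cost=1500.0,
              rho=0.1, wtp=10_000.0, alpha=0.05, sims=5_000):
    """Simulated power for testing INB = wtp*dE - dC > 0 with n per arm."""
    cov = rho * sd_eff * sd_cost
    sigma = np.array([[sd_eff ** 2, cov], [cov, sd_cost ** 2]])
    # sampling distribution of the two mean differences (two arms => 2/n)
    draws = rng.multivariate_normal([d_eff, d_cost], 2 * sigma / n, size=sims)
    inb = wtp * draws[:, 0] - draws[:, 1]
    se_inb = np.sqrt((wtp ** 2 * sd_eff ** 2 - 2 * wtp * cov + sd_cost ** 2) * 2 / n)
    return np.mean(inb / se_inb > norm.ppf(1 - alpha))   # one-sided test

for n in (200, 400, 800):
    print(n, round(power_inb(n), 3))
```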

  20. A single test for rejecting the null hypothesis in subgroups and in the overall sample.

    PubMed

    Lin, Yunzhi; Zhou, Kefei; Ganju, Jitendra

    2017-01-01

    In clinical trials, some patient subgroups are likely to demonstrate larger effect sizes than other subgroups. For example, the effect size, or informally the benefit with treatment, is often greater in patients with a moderate condition of a disease than in those with a mild condition. A limitation of the usual method of analysis is that it does not incorporate this ordering of effect size by patient subgroup. We propose a test statistic which supplements the conventional test by including this information and simultaneously tests the null hypothesis in pre-specified subgroups and in the overall sample. It results in more power than the conventional test when the differences in effect sizes across subgroups are at least moderately large; otherwise it loses power. The method involves combining p-values from models fit to pre-specified subgroups and the overall sample in a manner that assigns greater weight to subgroups in which a larger effect size is expected. Results are presented for randomized trials with two and three subgroups.
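
    The combination step can be made concrete with a weighted inverse-normal (Stouffer) rule, giving larger weights to subgroups where a larger effect is expected; the abstract does not spell out the authors' exact statistic, so treat this as a generic sketch with hypothetical p-values and weights:

```python
import numpy as np
from scipy.stats import norm

def weighted_stouffer(p_values, weights):
    """Combine one-sided p-values with pre-specified weights."""
    z = norm.isf(np.asarray(p_values))            # p-values -> z-scores
    w = np.asarray(weights, dtype=float)
    z_comb = np.sum(w * z) / np.sqrt(np.sum(w ** 2))
    return norm.sf(z_comb)                        # combined one-sided p-value

# p-values from the overall sample, the moderate-disease subgroup, and the
# mild-disease subgroup, weighted so the subgroup with the larger expected
# effect contributes more.
print(weighted_stouffer([0.04, 0.01, 0.30], weights=[1.0, 0.8, 0.4]))
```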

  1. Accounting for body size deviations when reporting bone mineral density variables in children.

    PubMed

    Webber, C E; Sala, A; Barr, R D

    2009-01-01

    In a child, bone mineral density (BMD) may differ from an age-expected normal value, not only because of the presence of disease, but also because of deviations of height or weight from population averages. Appropriate adjustment for body size deviations simplifies interpretation of BMD measurements. For children, a bone mineral density (BMD) measurement is normally expressed as a Z score. Interpretation is complicated when weight or height distinctly differ from age-matched children. We develop a procedure to allow for the influence of body size deviations upon measured BMD. We examined the relation between body size deviation and spine, hip and whole body BMD deviation in 179 normal children (91 girls). Expressions were developed that allowed derivation of an expected BMD based on age, gender and body size deviation. The difference between measured and expected BMD was expressed as a HAW score (Height-, Age-, Weight-adjusted score). In a second independent sample of 26 normal children (14 girls), measured spine, total femur and whole body BMD all fell within the same single normal range after accounting for age, gender and body size deviations. When traditional Z scores and HAW scores were compared in 154 children, 17.5% showed differences of more than 1 unit and such differences were associated with height and weight deviations. For almost 1 in 5 children, body size deviations influence BMD to an extent that could alter clinical management.

  2. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    PubMed

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires that the presence of species in a sample to be assessed while the counts of the number of individuals per species are only required for just a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample and at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.

  3. Dependence of flux-flow critical frequencies and generalized bundle sizes on distance of fluxoid traversal and fluxoid length in foil samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, J.D.; Joiner, W.C.H.

    1979-10-01

    Flux-flow noise power spectra taken on Pb80In20 foils as a function of the orientation of the magnetic field with respect to the sample surfaces are used to study changes in frequencies and bundle sizes as distances of fluxoid traversal and fluxoid lengths change. The results obtained for the frequency dependence of the noise spectra are entirely consistent with our model for flux motion interrupted by pinning centers, provided one makes the reasonable assumption that the distance between pinning centers which a fluxoid may encounter scales inversely with the fluxoid length. The importance of pinning centers in determining the noise characteristics is also demonstrated by the way in which subpulse distributions and generalized bundle sizes are altered by changes in the metallurgical structure of the sample. In unannealed samples the dependence of bundle size on magnetic field orientation is controlled by a structural anisotropy, and we find a correlation between large bundle size and the absence of short subpulse times. Annealing removes this anisotropy, and we find a stronger angular variation of bundle size than would be expected using present simplified models.

  4. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    PubMed

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as is expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

  5. Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses

    DOE PAGES

    Daurer, Benedikt J.; Okamoto, Kenta; Bielecki, Johan; ...

    2017-04-07

    This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ~40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ~35 to ~300 nm in diameter). This is likely to be owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles is critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10¹² photons per µm² per pulse. The full-width of the focus at half-maximum was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. Finally, the results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers.

  6. You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic.

    PubMed

    McShane, Blakeley B; Böckenholt, Ulf

    2014-11-01

    Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation. © The Author(s) 2014.
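
    A minimal sketch of the point above, under assumed values: averaging the standard two-sample power formula over a distribution of study-specific effects (between-study SD tau) yields lower power than the fixed-effect calculation. The numbers are illustrative, not the authors' formulae or data.

```python
import numpy as np
from scipy.stats import norm

def power_fixed(delta, sigma, n_per_group, alpha=0.05):
    """Standard two-sample z-approximation power for a fixed effect."""
    se = sigma * np.sqrt(2.0 / n_per_group)
    z_crit = norm.isf(alpha / 2)
    return norm.sf(z_crit - delta / se) + norm.cdf(-z_crit - delta / se)

def power_heterogeneous(mu_delta, tau, sigma, n_per_group, alpha=0.05, draws=100_000):
    """Average power when the study-specific effect varies as N(mu_delta, tau^2)."""
    rng = np.random.default_rng(0)
    deltas = rng.normal(mu_delta, tau, size=draws)
    return power_fixed(deltas, sigma, n_per_group, alpha).mean()

# Illustrative numbers: same average effect, but between-study SD tau erodes power.
print(power_fixed(0.5, 1.0, 64))                 # ~0.80 with the standard formula
print(power_heterogeneous(0.5, 0.25, 1.0, 64))   # noticeably lower
```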

  7. Luminescence isochron dating: a new approach using different grain sizes.

    PubMed

    Zhao, H; Li, S H

    2002-01-01

    A new approach to isochron dating is described using different sizes of quartz and K-feldspar grains. The technique can be applied to sites with time-dependent external dose rates. It is assumed that any underestimation of the equivalent dose (De) using K-feldspar is by a factor F, which is independent of grain size (90-350 microm) for a given sample. Calibration of the beta source for different grain sizes is discussed, and then the sample ages are calculated using the differences between quartz and K-feldspar De from grains of similar size. Two aeolian sediment samples from north-eastern China are used to illustrate the application of the new method. It is confirmed that the observed values of De derived using K-feldspar underestimate the expected doses (based on the quartz De) but, nevertheless, these K-feldspar De values correlate linearly with the calculated internal dose rate contribution, supporting the assumption that the underestimation factor F is independent of grain size. The isochron ages are also compared with the results obtained using quartz De and the measured external dose rates.

  8. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    PubMed

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
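
    A minimal sketch of the rule of thumb described above, with an assumed dropout rate: add to the dropout-free sample size the number of subjects expected to drop from a sample of that original size.

```python
import math

def adjust_for_dropouts(n_complete, dropout_rate):
    """Rule-of-thumb adjustment: add to the dropout-free sample size the number
    of subjects expected to drop from a sample of that original size."""
    return n_complete + math.ceil(n_complete * dropout_rate)

# Illustrative: 120 subjects give the desired power with no dropouts,
# and a 15% dropout rate is anticipated.
print(adjust_for_dropouts(120, 0.15))  # 138
```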

  9. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps are highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This applies especially to low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level). Whether the same observations apply on a lower spatial scale should be further investigated.

  10. A Systematic Review and Meta-Analysis Estimating the Expected Dropout Rates in Randomized Controlled Trials on Yoga Interventions.

    PubMed

    Cramer, Holger; Haller, Heidemarie; Dobos, Gustav; Lauche, Romy

    2016-01-01

    A reasonable estimation of expected dropout rates is vital for adequate sample size calculations in randomized controlled trials (RCTs). Underestimating expected dropout rates increases the risk of false-negative results, while overestimating rates results in overly large sample sizes, raising both ethical and economic issues. To estimate expected dropout rates in RCTs on yoga interventions, MEDLINE/PubMed, Scopus, IndMED, and the Cochrane Library were searched through February 2014; a total of 168 RCTs were meta-analyzed. Overall dropout rate was 11.42% (95% confidence interval [CI] = 10.11%, 12.73%) in the yoga groups; rates were comparable in usual care and psychological control groups and were slightly higher in exercise control groups (rate = 14.53%; 95% CI = 11.56%, 17.50%; odds ratio = 0.82; 95% CI = 0.68, 0.98; p = 0.03). For RCTs with durations above 12 weeks, dropout rates in yoga groups increased to 15.23% (95% CI = 11.79%, 18.68%). The upper bound of the 95% CIs for dropout rates commonly was below 20% regardless of study origin, health condition, gender, age groups, and intervention characteristics; however, it exceeded 40% for studies on HIV patients or heterogeneous age groups. In conclusion, dropout rates can be expected to be less than 15 to 20% for most RCTs on yoga interventions. Yet dropout rates beyond 40% are possible depending on the participants' sociodemographic and health condition.

  11. A Systematic Review and Meta-Analysis Estimating the Expected Dropout Rates in Randomized Controlled Trials on Yoga Interventions

    PubMed Central

    Haller, Heidemarie; Dobos, Gustav; Lauche, Romy

    2016-01-01

    A reasonable estimation of expected dropout rates is vital for adequate sample size calculations in randomized controlled trials (RCTs). Underestimating expected dropout rates increases the risk of false-negative results, while overestimating rates results in overly large sample sizes, raising both ethical and economic issues. To estimate expected dropout rates in RCTs on yoga interventions, MEDLINE/PubMed, Scopus, IndMED, and the Cochrane Library were searched through February 2014; a total of 168 RCTs were meta-analyzed. Overall dropout rate was 11.42% (95% confidence interval [CI] = 10.11%, 12.73%) in the yoga groups; rates were comparable in usual care and psychological control groups and were slightly higher in exercise control groups (rate = 14.53%; 95% CI = 11.56%, 17.50%; odds ratio = 0.82; 95% CI = 0.68, 0.98; p = 0.03). For RCTs with durations above 12 weeks, dropout rates in yoga groups increased to 15.23% (95% CI = 11.79%, 18.68%). The upper bound of the 95% CIs for dropout rates commonly was below 20% regardless of study origin, health condition, gender, age groups, and intervention characteristics; however, it exceeded 40% for studies on HIV patients or heterogeneous age groups. In conclusion, dropout rates can be expected to be less than 15 to 20% for most RCTs on yoga interventions. Yet dropout rates beyond 40% are possible depending on the participants' sociodemographic and health condition. PMID:27413387

  12. Poly (lactic-co-glycolic acid) particles prepared by microfluidics and conventional methods. Modulated particle size and rheology.

    PubMed

    Perez, Aurora; Hernández, Rebeca; Velasco, Diego; Voicu, Dan; Mijangos, Carmen

    2015-03-01

    Microfluidic techniques are expected to provide narrower particle size distribution than conventional methods for the preparation of poly (lactic-co-glycolic acid) (PLGA) microparticles. Besides, it is hypothesized that the particle size distribution of poly (lactic-co-glycolic acid) microparticles influences the settling behavior and rheological properties of its aqueous dispersions. For the preparation of PLGA particles, two different methods, microfluidic and conventional oil-in-water emulsification methods were employed. The particle size and particle size distribution of PLGA particles prepared by microfluidics were studied as a function of the flow rate of the organic phase while particles prepared by conventional methods were studied as a function of stirring rate. In order to study the stability and structural organization of colloidal dispersions, settling experiments and oscillatory rheological measurements were carried out on aqueous dispersions of PLGA particles with different particle size distributions. Microfluidics technique allowed the control of size and size distribution of the droplets formed in the process of emulsification. This resulted in a narrower particle size distribution for samples prepared by MF with respect to samples prepared by conventional methods. Polydisperse samples showed a larger tendency to aggregate, thus confirming the advantages of microfluidics over conventional methods, especially if biomedical applications are envisaged. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Large exchange bias effect in NiFe2O4/CoO nanocomposites

    NASA Astrophysics Data System (ADS)

    Mohan, Rajendra; Prasad Ghosh, Mritunjoy; Mukherjee, Samrat

    2018-03-01

    In this work, we report the exchange bias effect of NiFe2O4/CoO nanocomposites, synthesized via chemical co-precipitation method. Four samples of different particle size ranging from 4 nm to 31 nm were prepared with the annealing temperature varying from 200 °C to 800 °C. X-ray diffraction analysis of all the samples confirmed the presence of cubic spinel phase of Nickel ferrite along with CoO phase without trace of any impurity. Sizes of the particles were studied from transmission electron micrographs and were found to be in agreement with those estimated from x-ray diffraction. Field cooled (FC) hysteresis loops at 5 K revealed an exchange bias (HE) of 2.2 kOe for the sample heated at 200 °C which decreased with the increase of particle size. Exchange bias expectedly vanished at 300 K due to high thermal energy (kBT) and low effective surface anisotropy. M-T curves revealed a blocking temperature of 135 K for the sample with smaller particle size.

  14. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
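
    A simplified sketch of the 'basic' two-stage scheme described above; the pass/fail threshold, early-stopping margin, and sample sizes are illustrative assumptions rather than the Welfare Quality values.

```python
import random

def two_stage_classification(herd, first_n, threshold, margin=0.10):
    """Sketch of a basic two-stage scheme: score half the usual sample, stop early
    if the estimated prevalence is clearly above or below the pass/fail threshold,
    otherwise score the second half and decide on the combined estimate.
    `herd` holds 0/1 lameness indicators; threshold and margin are illustrative."""
    idx = random.sample(range(len(herd)), 2 * first_n)   # both halves, without replacement
    stage1 = [herd[i] for i in idx[:first_n]]
    p1 = sum(stage1) / first_n
    if abs(p1 - threshold) > margin:                      # clear decision: stop early
        return ("fail" if p1 > threshold else "pass", first_n)
    stage2 = [herd[i] for i in idx[first_n:]]
    p = (sum(stage1) + sum(stage2)) / (2 * first_n)
    return ("fail" if p > threshold else "pass", 2 * first_n)

random.seed(1)
herd = [1] * 30 + [0] * 170                               # 15% true lameness prevalence
print(two_stage_classification(herd, first_n=30, threshold=0.20))
```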

  15. Trends in Selecting Undergraduate Business Majors & International Enrollment & Expected Salaries

    ERIC Educational Resources Information Center

    Ozgur, Ceyhun; Li, Yang; Rogers, Grace

    2015-01-01

    The paper begins with a brief review of the literature on how business students in the U.S. choose their major, and we list the most popular majors at U.S. universities. We also discuss the factors that influenced students' choices. In our next research project, we will not only use a larger sample size but also the sample will come from a…

  16. Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses

    PubMed Central

    Okamoto, Kenta; Bielecki, Johan; Maia, Filipe R. N. C.; Mühlig, Kerstin; Seibert, M. Marvin; Hantke, Max F.; Benner, W. Henry; Svenda, Martin; Ekeberg, Tomas; Loh, N. Duane; Pietrini, Alberto; Zani, Alessandro; Rath, Asawari D.; Westphal, Daniel; Kirian, Richard A.; Awel, Salah; Wiedorn, Max O.; van der Schot, Gijs; Carlsson, Gunilla H.; Hasse, Dirk; Sellberg, Jonas A.; Barty, Anton; Andreasson, Jakob; Boutet, Sébastien; Williams, Garth; Koglin, Jason; Hajdu, Janos; Larsson, Daniel S. D.

    2017-01-01

    This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ∼40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ∼35 to ∼300 nm in diameter). This is likely to be owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles is critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10¹² photons per µm² per pulse. The full-width of the focus at half-maximum was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. The results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers. PMID:28512572

  17. Characterization of Raman Scattering in Solid Samples with Different Particle Sizes and Elucidation on the Trends of Particle Size-Dependent Intensity Variations in Relation to Changes in the Sizes of Laser Illumination and Detection Area.

    PubMed

    Duy, Pham K; Chun, Seulah; Chung, Hoeil

    2017-11-21

    We have systematically characterized Raman scatterings in solid samples with different particle sizes and investigated subsequent trends of particle size-induced intensity variations. For this purpose, both lactose powders and pellets composed of five different particle sizes were prepared. Uniquely in this study, three spectral acquisition schemes with different sizes of laser illuminations and detection windows were employed for the evaluation, since it was expected that the experimental configuration would be another factor potentially influencing the intensity of the lactose peak, along with the particle size itself. In both samples, the distribution of Raman photons became broader with the increase in particle size, as the mean free path of laser photons, the average photon travel distance between consecutive scattering locations, became longer under this situation. When the particle size was the same, the Raman photon distribution was narrower in the pellets since the individual particles were more densely packed in a given volume (the shorter mean free path). When the size of the detection window was small, the number of photons reaching the detector decreased as the photon distribution was larger. Meanwhile, a large-window detector was able to collect the widely distributed Raman photons more effectively; therefore, the trends of intensity change with the variation in particle size were dissimilar depending on the employed spectral acquisition schemes. Overall, the Monte Carlo simulation was effective at probing the photon distribution inside the samples and helped to support the experimental observations.

  18. Inferred Paternity and Male Reproductive Success in a Killer Whale (Orcinus orca) Population.

    PubMed

    Ford, Michael J; Hanson, M Bradley; Hempelmann, Jennifer A; Ayres, Katherine L; Emmons, Candice K; Schorr, Gregory S; Baird, Robin W; Balcomb, Kenneth C; Wasser, Samuel K; Parsons, Kim M; Balcomb-Bartok, Kelly

    2011-01-01

    We used data from 78 individuals at 26 microsatellite loci to infer parental and sibling relationships within a community of fish-eating ("resident") eastern North Pacific killer whales (Orcinus orca). Paternity analysis involving 15 mother/calf pairs and 8 potential fathers and whole-pedigree analysis of the entire sample produced consistent results. The variance in male reproductive success was greater than expected by chance and similar to that of other aquatic mammals. Although the number of confirmed paternities was small, reproductive success appeared to increase with male age and size. We found no evidence that males from outside this small population sired any of the sampled individuals. In contrast to previous results in a different population, many offspring were the result of matings within the same "pod" (long-term social group). Despite this pattern of breeding within social groups, we found no evidence of offspring produced by matings between close relatives, and the average internal relatedness of individuals was significantly less than expected if mating were random. The population's estimated effective size was <30 or about 1/3 of the current census size. Patterns of allele frequency variation were consistent with a population bottleneck.

  19. 78 FR 74175 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-10

    ... precision requirements or power calculations that justify the proposed sample size, the expected response...: Proposed Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on... Information Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback...

  20. A Naturalistic Study of Driving Behavior in Older Adults and Preclinical Alzheimer Disease.

    PubMed

    Babulal, Ganesh M; Stout, Sarah H; Benzinger, Tammie L S; Ott, Brian R; Carr, David B; Webb, Mollie; Traub, Cindy M; Addison, Aaron; Morris, John C; Warren, David K; Roe, Catherine M

    2017-01-01

    A clinical consequence of symptomatic Alzheimer's disease (AD) is impaired driving performance. However, decline in driving performance may begin in the preclinical stage of AD. We used a naturalistic driving methodology to examine differences in driving behavior over one year in a small sample of cognitively normal older adults with ( n = 10) and without ( n = 10) preclinical AD. As expected with a small sample size, there were no statistically significant differences between the two groups, but older adults with preclinical AD drove less often, were less likely to drive at night, and had fewer aggressive behaviors such as hard braking, speeding, and sudden acceleration. The sample size required to power a larger study to determine differences was calculated.

  1. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach

    PubMed Central

    Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric

    2016-01-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles. PMID:26943927

  2. The narrow pulse approximation and long length scale determination in xenon gas diffusion NMR studies of model porous media

    NASA Technical Reports Server (NTRS)

    Mair, R. W.; Sen, P. N.; Hurlimann, M. D.; Patz, S.; Cory, D. G.; Walsworth, R. L.

    2002-01-01

    We report a systematic study of xenon gas diffusion NMR in simple model porous media, random packs of mono-sized glass beads, and focus on three specific areas peculiar to gas-phase diffusion. These topics are: (i) diffusion of spins on the order of the pore dimensions during the application of the diffusion encoding gradient pulses in a PGSE experiment (breakdown of the narrow pulse approximation and imperfect background gradient cancellation), (ii) the ability to derive long length scale structural information, and (iii) effects of finite sample size. We find that the time-dependent diffusion coefficient, D(t), of the imbibed xenon gas at short diffusion times in small beads is significantly affected by the gas pressure. In particular, as expected, we find smaller deviations between measured D(t) and theoretical predictions as the gas pressure is increased, resulting from reduced diffusion during the application of the gradient pulse. The deviations are then completely removed when water D(t) is observed in the same samples. The use of gas also allows us to probe D(t) over a wide range of length scales and observe the long time asymptotic limit which is proportional to the inverse tortuosity of the sample, as well as the diffusion distance where this limit takes effect (approximately 1-1.5 bead diameters). The Pade approximation can be used as a reference for expected xenon D(t) data between the short and the long time limits, allowing us to explore deviations from the expected behavior at intermediate times as a result of finite sample size effects. Finally, the application of the Pade interpolation between the long and the short time asymptotic limits yields a fitted length scale (the Pade length), which is found to be approximately 0.13b for all bead packs, where b is the bead diameter. c. 2002 Elsevier Sciences (USA).

  3. The narrow pulse approximation and long length scale determination in xenon gas diffusion NMR studies of model porous media.

    PubMed

    Mair, R W; Sen, P N; Hürlimann, M D; Patz, S; Cory, D G; Walsworth, R L

    2002-06-01

    We report a systematic study of xenon gas diffusion NMR in simple model porous media, random packs of mono-sized glass beads, and focus on three specific areas peculiar to gas-phase diffusion. These topics are: (i) diffusion of spins on the order of the pore dimensions during the application of the diffusion encoding gradient pulses in a PGSE experiment (breakdown of the narrow pulse approximation and imperfect background gradient cancellation), (ii) the ability to derive long length scale structural information, and (iii) effects of finite sample size. We find that the time-dependent diffusion coefficient, D(t), of the imbibed xenon gas at short diffusion times in small beads is significantly affected by the gas pressure. In particular, as expected, we find smaller deviations between measured D(t) and theoretical predictions as the gas pressure is increased, resulting from reduced diffusion during the application of the gradient pulse. The deviations are then completely removed when water D(t) is observed in the same samples. The use of gas also allows us to probe D(t) over a wide range of length scales and observe the long time asymptotic limit which is proportional to the inverse tortuosity of the sample, as well as the diffusion distance where this limit takes effect (approximately 1-1.5 bead diameters). The Padé approximation can be used as a reference for expected xenon D(t) data between the short and the long time limits, allowing us to explore deviations from the expected behavior at intermediate times as a result of finite sample size effects. Finally, the application of the Padé interpolation between the long and the short time asymptotic limits yields a fitted length scale (the Padé length), which is found to be approximately 0.13b for all bead packs, where b is the bead diameter. c. 2002 Elsevier Sciences (USA).
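
    For context, the Padé interpolation referred to above is commonly written (following Latour et al.) as a bridge between the short-time Mitra expansion and the long-time tortuosity limit of D(t). The sketch below uses that common form with illustrative parameter values; it is not taken from the paper itself.

```python
import numpy as np

def pade_D(t, D0, SV, alpha, theta):
    """Pade interpolation between the short-time (Mitra) and long-time (tortuosity)
    limits of the time-dependent diffusion coefficient, in the commonly used
    Latour et al. form. Parameter values below are illustrative assumptions:
    D0 = free diffusivity, SV = pore surface-to-volume ratio, alpha = tortuosity,
    theta = fitted Pade time (related to the Pade length L by theta ~ L^2 / D0)."""
    c = 4.0 / (9.0 * np.sqrt(np.pi)) * SV * np.sqrt(D0)   # Mitra short-time coefficient
    a = 1.0 - 1.0 / alpha
    x = c * np.sqrt(t) + a * t / theta
    return D0 * (1.0 - a * x / (a + x))                   # D(t) -> D0/alpha as t -> infinity

# Illustrative values for a gas diffusing in a bead pack.
t = np.logspace(-4, 1, 6)                                 # diffusion times (s)
print(pade_D(t, D0=5.7e-6, SV=6.0e4, alpha=1.6, theta=0.05))
```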

  4. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure.

    PubMed

    Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N

    2016-06-01

    When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (ES = δ/σ), that is, the hypothesized difference in means (δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of δ and σ as the averaging weight, is used, and the value of ES is found that equates the prespecified frequentist power (1 − β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of ES found using this method may be expressed as a function of the prior means of δ and σ and their prior standard deviations. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size which the study has a high probability of correctly detecting, based on the available prior information on the difference δ and the standard deviation σ, provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
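
    A rough sketch of the idea, under assumed priors: average the classical power curve over priors on δ and σ, find the sample size at which this conditional expected power reaches the target, and back out the (down-weighted) effect size whose classical calculation gives that same sample size. All numbers are illustrative, not the article's example.

```python
import numpy as np
from scipy.stats import norm

def classical_power(delta, sigma, n_per_group, alpha=0.05):
    """Standard two-sample z-approximation power for a difference in means."""
    se = sigma * np.sqrt(2.0 / n_per_group)
    return norm.sf(norm.isf(alpha / 2) - delta / se)

def conditional_expected_power(n_per_group, mu_d, sd_d, mu_s, sd_s, draws=20_000):
    """Average the classical power curve over priors on delta and sigma."""
    rng = np.random.default_rng(0)
    delta = rng.normal(mu_d, sd_d, draws)
    sigma = np.abs(rng.normal(mu_s, sd_s, draws))          # keep sigma positive
    return classical_power(delta, sigma, n_per_group).mean()

# Illustrative priors from 'historical data': delta ~ N(4, 1.5), sigma ~ N(8, 2).
target = 0.80
n_star = next(n for n in range(10, 2000)
              if conditional_expected_power(n, 4, 1.5, 8, 2) >= target)
# Down-weighted effect size: the value whose *classical* sample size equals n_star.
es_star = (norm.isf(0.025) + norm.ppf(target)) * np.sqrt(2.0 / n_star)
print(n_star, round(es_star, 3), round(4 / 8, 3))          # compare with the naive ratio 0.5
```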

  5. Time and expected value of sample information wait for no patient.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2008-01-01

    The expected value of sample information (EVSI) from prospective trials has previously been modeled as the product of EVSI per patient, and the number of patients across the relevant time horizon less those "used up" in trials. However, this implicitly assumes the eligible patient population to which information from a trial can be applied across a time horizon are independent of time for trial accrual, follow-up and analysis. This article demonstrates that in calculating the EVSI of a trial, the number of patients who benefit from trial information should be reduced by those treated outside as well as within the trial over the time until trial evidence is updated, including time for accrual, follow-up and analysis. Accounting for time is shown to reduce the eligible patient population: 1) independent of the size of trial in allowing for time of follow-up and analysis, and 2) dependent on the size of trial for time of accrual, where the patient accrual rate is less than incidence. Consequently, the EVSI and expected net gain (ENG) at any given trial size are shown to be lower when accounting for time, with lower ENG reinforced in the case of trials undertaken while delaying decisions by additional opportunity costs of time. Appropriately accounting for time reduces the EVSI of trial design and increase opportunity costs of trials undertaken with delay, leading to lower likelihood of trialing being optimal and smaller trial designs where optimal.
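
    A toy calculation of the point above, with made-up numbers: the patient population to which trial information can be applied shrinks once accrual, follow-up and analysis time are subtracted from the time horizon.

```python
def patients_benefiting(incidence_per_year, horizon_years, n_trial,
                        accrual_rate_per_year, followup_years, analysis_years):
    """Patients who can still benefit from trial information once those treated
    inside and outside the trial before evidence is updated are excluded.
    All numbers used below are illustrative assumptions."""
    accrual_years = n_trial / accrual_rate_per_year
    delay_years = accrual_years + followup_years + analysis_years
    remaining_years = max(0.0, horizon_years - delay_years)
    return incidence_per_year * remaining_years

# A naive EVSI calculation would use incidence * horizon - n_trial instead.
naive = 10_000 * 10 - 1_000
with_time = patients_benefiting(10_000, 10, 1_000,
                                accrual_rate_per_year=500,
                                followup_years=1, analysis_years=0.5)
print(naive, with_time)   # 99,000 vs 65,000 patients to whom the results apply
```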

  6. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    PubMed

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  7. Development of a magnetic lab-on-a-chip for point-of-care sepsis diagnosis

    NASA Astrophysics Data System (ADS)

    Schotter, Joerg; Shoshi, Astrit; Brueckl, Hubert

    2009-05-01

    We present design criteria, operation principles and experimental examples of magnetic marker manipulation for our magnetic lab-on-a-chip prototype. It incorporates both magnetic sample preparation and detection by embedded GMR-type magnetoresistive sensors and is optimized for the automated point-of-care detection of four different sepsis-indicative cytokines directly from about 5 μl of whole blood. The sample volume, magnetic particle size and cytokine concentration determine the microfluidic volume, sensor size and dimensioning of the magnetic gradient field generators. By optimizing these parameters to the specific diagnostic task, best performance is expected with respect to sensitivity, analysis time and reproducibility.

  8. Terrestrial-passage theory: failing a test.

    PubMed

    Reed, Charles F; Krupinski, Elizabeth A

    2009-01-01

    Terrestrial-passage theory proposes that the 'moon' and 'sky' illusions occur because observers learn to expect an elevation-dependent transformation of visual angle. The transformation accompanies daily movement through ordinary environments of fixed-altitude objects. Celestial objects display the same visual angle at all elevations, and hence are necessarily non-conforming with the ordinary transformation. On hypothesis, observers should target angular sizes to appear greater at elevation than at horizon. However, in a sample of forty-eight observers there was no significant difference between the perceived angular size of a constellation of stars at horizon and that predicted for a specific elevation. Occurrence of the illusion was not restricted to those observers who expected angular expansion. These findings fail to support the terrestrial-passage theory of the illusion.

  9. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
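
    As a concrete illustration of these determinants, the standard normal-approximation formula for comparing two means is n = 2(z₁₋α/₂ + z₁₋β)²σ²/Δ² per group; the sketch below uses illustrative values for Δ and σ.

```python
import math
from scipy.stats import norm

def n_per_group_two_means(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two means (normal approximation):
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2."""
    z_alpha = norm.isf(alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative: detect a 5-unit difference with SD 12 at conventional alpha and power.
print(n_per_group_two_means(delta=5, sigma=12))               # ~91 per group
print(n_per_group_two_means(delta=5, sigma=12, power=0.90))   # larger power -> larger n
```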

  10. Monitoring landscape metrics by point sampling: accuracy in estimating Shannon's diversity and edge density.

    PubMed

    Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran

    2010-05-01

    Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte-Carlo simulation was applied to study the performance of different designs. Random and systematic samplings were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of Shannon's diversity estimator was shown to decrease when sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived showing that point sampling could be a competitive alternative to complete wall-to-wall mapping.
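
    A simplified sketch of the two point-sampling estimators discussed above: Shannon's diversity from the land-cover class observed at each sample point, and edge length from the share of points falling inside buffers around edges (ignoring the boundary-effect bias the study analyses). All values are illustrative.

```python
import math
import random

def shannon_from_points(class_at_point):
    """Shannon's diversity H' = -sum(p_i * ln p_i), with p_i estimated from the
    land-cover class observed at each sample point."""
    n = len(class_at_point)
    counts = {}
    for c in class_at_point:
        counts[c] = counts.get(c, 0) + 1
    return -sum((k / n) * math.log(k / n) for k in counts.values())

def edge_length_from_points(points_in_buffer, n_points, landscape_area, buffer_width):
    """Edge-length estimate: the buffered-edge area is estimated from the share of
    points inside the buffers, then divided by the total buffer width (2w for a
    buffer of width w on each side of an edge). Simplified sketch only."""
    buffered_area = landscape_area * points_in_buffer / n_points
    return buffered_area / (2 * buffer_width)

random.seed(0)
# Illustrative sample of 400 points over a 1 km^2 landscape with four classes.
classes = random.choices(["forest", "water", "field", "urban"],
                         weights=[50, 10, 30, 10], k=400)
print(round(shannon_from_points(classes), 3))
print(edge_length_from_points(points_in_buffer=36, n_points=400,
                              landscape_area=1_000_000, buffer_width=5))  # metres
```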

  11. 76 FR 61360 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-04

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Activities: Proposed Collection; Comment Request; Generic Clearance for the Collection of Qualitative... Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for approval under the Paperwork...

  12. 77 FR 70780 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-27

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... notice. Proposed Project Generic Clearance for the Collection of Qualitative Feedback on Agency Service... Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for approval under the Paperwork...

  13. 76 FR 35069 - Agency Information Collection Activities; Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-15

    ... precision requirements or power calculations that justify the proposed sample size, the expected response...; Proposed Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on... (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery...

  14. 78 FR 40729 - Agency Information Collection Activities; Proposed Collection; Comment Request: Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-08

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Qualitative Feedback on Agency Service Delivery AGENCY: Washington Headquarters Service (WHS), DOD. ACTION: 30... (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery...

  15. 76 FR 17861 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-31

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods...; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery... Information Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback...

  16. 76 FR 24920 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-03

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service... Information Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback...

  17. 78 FR 26033 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-03

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Collection of Qualitative Feedback on Agency Service Delivery--NEW--Epidemiology and Analysis Program Office... Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for approval under...

  18. 77 FR 27062 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-08

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Project NIOSH Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery--NEW... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  19. 76 FR 41280 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ... proposed sample size, the expected response rate, methods for assessing potential non-response bias, the... Activities: Proposed Collection; Comment Request; Generic Clearance for the Collection of Qualitative...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery '' to OMB for...

  20. 77 FR 52708 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential...: Proposed Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on... Information Collection request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback...

  1. How Students Cope with a Procedureless Lab Exercise.

    ERIC Educational Resources Information Center

    Pickering, Miles; Crabtree, Robert H.

    1979-01-01

    Reports a study conducted to determine how students cope with a procedureless laboratory situation in physical chemistry. Students are expected to use ingenuity, determine choice of sample size, conditions, and temperature extrapolation in an experiment on measuring heat of solution of an unknown salt. (Author/SA)

  2. Dynamic relationships between body size, species richness, abundance, and energy use in a shallow marine epibenthic faunal community

    PubMed Central

    Labra, Fabio A; Hernández-Miranda, Eduardo; Quiñones, Renato A

    2015-01-01

    We study the temporal variation in the empirical relationships among body size (S), species richness (R), and abundance (A) in a shallow marine epibenthic faunal community in Coliumo Bay, Chile. We also extend previous analyses by calculating individual energy use (E) and test whether its bivariate and trivariate relationships with S and R are in agreement with expectations derived from the energetic equivalence rule. Carnivorous and scavenger species representing over 95% of sample abundance and biomass were studied. For each individual, body size (g) was measured and E was estimated following published allometric relationships. Data for each sample were tabulated into exponential body size bins, comparing species-averaged values with individual-based estimates which allow species to potentially occupy multiple size classes. For individual-based data, both the number of individuals and species across body size classes are fit by a Weibull function rather than by a power law scaling. Species richness is also a power law of the number of individuals. Energy use shows a piecewise scaling relationship with body size, with energetic equivalence holding true only for size classes above the modal abundance class. Species-based data showed either weak linear or no significant patterns, likely due to the decrease in the number of data points across body size classes. Hence, for individual-based size spectra, the SRA relationship seems to be general despite seasonal forcing and strong disturbances in Coliumo Bay. The unimodal abundance distribution results in a piecewise energy scaling relationship, with small individuals showing a positive scaling and large individuals showing energetic equivalence. Hence, strict energetic equivalence should not be expected for unimodal abundance distributions. On the other hand, while species-based data do not show unimodal SRA relationships, energy use across body size classes did not show significant trends, supporting energetic equivalence. PMID:25691966

  3. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small-sample performance estimation, such as a recently proposed procedure called Repeated Random Sampling (RSS), are also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.

  4. Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.

    Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree-based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases. We present empirical results to show that the performance degradation follows a logistic function.
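
    The report's conclusion that mining performance degrades abruptly and follows a logistic function of the sample size can be illustrated by fitting a logistic curve to performance-versus-sampling-fraction measurements. The data points below are invented for illustration; only the fitting recipe is shown.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical measurements: fraction of the graph retained by sampling vs.
      # fraction of frequent subgraphs (or communities) still recovered afterwards.
      sample_frac = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.0])
      recovered = np.array([0.04, 0.07, 0.15, 0.45, 0.80, 0.92, 0.97, 0.99, 1.0])

      def logistic(x, x0, k):
          # Sharp transition around x0 with steepness k
          return 1.0 / (1.0 + np.exp(-k * (x - x0)))

      (x0, k), _ = curve_fit(logistic, sample_frac, recovered, p0=[0.3, 10.0])
      print(f"estimated transition point x0 = {x0:.2f}, steepness k = {k:.1f}")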

  5. Researchers’ Intuitions About Power in Psychological Research

    PubMed Central

    Bakker, Marjan; Hartgerink, Chris H. J.; Wicherts, Jelte M.; van der Maas, Han L. J.

    2016-01-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers’ experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. PMID:27354203

  6. Researchers' Intuitions About Power in Psychological Research.

    PubMed

    Bakker, Marjan; Hartgerink, Chris H J; Wicherts, Jelte M; van der Maas, Han L J

    2016-08-01

    Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers' experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. © The Author(s) 2016.
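
    A standard a priori power calculation makes the mismatch described in these two records concrete: for a small standardized effect (Cohen's d = 0.2), the per-group sample size needed for .80 power is far larger than typical cell sizes. The sketch below uses statsmodels; the d = 0.2 and alpha = .05 settings are conventional assumptions, not values taken from the survey items.

      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()

      # Per-group sample size for a two-sided two-sample t test to detect a
      # small effect (Cohen's d = 0.2) with 80% power at alpha = .05.
      n_per_group = analysis.solve_power(effect_size=0.2, power=0.80, alpha=0.05)
      print(n_per_group)   # close to 400 participants per group

      # Conversely, the power achieved with a "typical" cell size of 25 per group.
      print(analysis.solve_power(effect_size=0.2, nobs1=25, alpha=0.05))  # about 0.1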

  7. The Power of Percent

    ERIC Educational Resources Information Center

    Watson, Jane; English, Lyn

    2013-01-01

    Jane Watson and Lyn English use a chance activity exploring expectation and variation with coin tossing to highlight the importance of understanding the part-whole relationship embodied in percentage and its power to measure and compare for different wholes, in this case different sample sizes. The purpose of this article is to raise awareness of…

  8. 76 FR 36139 - Agency Information Collection Activities: Submission for OMB Review; Comment Request; Generic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-21

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: Federal Emergency...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to the Office of...

  9. 76 FR 29763 - Agency Information Collection Activities; Submission for Office of Management and Budget Review...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-23

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY... ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery.'' Also include...

  10. 77 FR 63798 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-17

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Clearance for the Collection of Qualitative Feedback on the Service Delivery of the Consumer Financial... title, ``Generic Clearance for the Collection of Qualitative Feedback on the Service Delivery of the...

  11. 76 FR 23536 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-27

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Qualitative Feedback on Agency Service Delivery April 22, 2011. AGENCY: Department of Agriculture (USDA... Qualitative Feedback on Agency Service Delivery'' to OMB for approval under the Paperwork Reduction Act (PRA...

  12. 76 FR 25693 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-05

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  13. 76 FR 37825 - Agency Information Collection Activities; Generic Clearance for the Collection of Qualitative...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-28

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Activities; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: U...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery '' to OMB for...

  14. 76 FR 13020 - Agency Information Collection Activities: Comment Request; Generic Clearance for the Collection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: Department of... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  15. 76 FR 79702 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-22

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Qualitative Feedback on Agency Service Delivery AGENCY: National Institute of Mental Health (NIMH), HHS... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  16. 77 FR 28893 - Intent To Request Approval From OMB of One New Public Collection of Information: Generic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-16

    ... that justify the proposed sample size, the expected response rate, methods for assessing potential non... Qualitative Feedback on Agency Service Delivery AGENCY: Transportation Security Administration, DHS. ACTION... collection activity provides a means to gather qualitative customer and stakeholder feedback in an efficient...

  17. 76 FR 13977 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-15

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... of Qualitative Feedback on Agency Service Delivery AGENCY: Office of the Secretary/Office of the...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for...

  18. 77 FR 72361 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Qualitative Feedback on Agency Service Delivery SUMMARY: As part of a Federal Government-wide effort to... Information Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback...

  19. 76 FR 19826 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-08

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY... (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery...

  20. 76 FR 10939 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-28

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... of Qualitative Feedback on Agency Service Delivery AGENCY: Federal Railroad Administration (FRA... Qualitative Feedback on Agency Service Delivery'' to OMB for approval under the Paperwork Reduction Act (PRA...

  1. 78 FR 44099 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-23

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service... Qualitative Feedback on Agency Service Delivery'' for approval under the Paperwork Reduction Act (PRA) (44 U.S...

  2. 75 FR 80542 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-22

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential...; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  3. 76 FR 31383 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-31

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods...; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: Peace... Qualitative Feedback on Agency Service Delivery '' to OMB for approval under the Paperwork Reduction Act (PRA...

  4. 78 FR 23755 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-22

    ... justify the proposed sample size, the expected response rate, methods for assessing potential non-response... Collection; Comment Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service... Qualitative Feedback on Agency Service Delivery '' for approval under the Paperwork Reduction Act (PRA) (44 U...

  5. 76 FR 13019 - Agency Information Collection Activities: Comment Request; Generic Clearance for the Collection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: Department of...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery '' to OMB for...

  6. 76 FR 55398 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-07

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Qualitative Feedback on Agency Service Delivery AGENCY: National Institutes of Health, Eunice Kennedy Shriver...): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery '' to OMB for...

  7. 76 FR 44938 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-27

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... Qualitative Feedback on Agency Service Delivery: National Cancer Center (NCI) ACTION: 30-Day notice of... Collection Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency...

  8. 76 FR 21800 - Agency Information Collection Activities: Submission for OMB Review; Comment Request; Generic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-18

    ... precision requirements or power calculations that justify the proposed sample size, the expected response... Activities: Submission for OMB Review; Comment Request; Generic Clearance for the Collection of Qualitative... Request (Generic ICR): ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service...

  9. 76 FR 20967 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-14

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Request; Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery AGENCY: U... Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to OMB for approval under...

  10. 76 FR 38355 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-30

    ... calculations that justify the proposed sample size, the expected response rate, methods for assessing potential... of Qualitative Feedback on Agency Service Delivery AGENCY: Architectural and Transportation Barriers...: ``Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery'' to the Office of...

  11. 76 FR 22920 - Agency Information Collection Activities: Proposed Collection; Comment Request; DOL Generic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-25

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods... Collection; Comment Request; DOL Generic Clearance for the Collection of Qualitative Feedback on Agency... of Qualitative Feedback on Agency Service Delivery'' to the Office of Management and Budget (OMB) for...

  12. Min and Max Exponential Extreme Interval Values and Statistics

    ERIC Educational Resources Information Center

    Jance, Marsha; Thomopoulos, Nick

    2009-01-01

    The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g[subscript a] is defined as a…

  13. Estimation of within-stratum variance for sample allocation: Foreign commodity production forecasting

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)

    1980-01-01

    The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to stratum variance estimation for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.

  14. Dimensions of design space: a decision-theoretic approach to optimal research design.

    PubMed

    Conti, Stefano; Claxton, Karl

    2009-01-01

    Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.

  15. [A comparison of convenience sampling and purposive sampling].

    PubMed

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation, not by statistical power analysis.

  16. Observational studies of patients in the emergency department: a comparison of 4 sampling methods.

    PubMed

    Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R

    2012-08-01

    We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.
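
    The comparison described above amounts to repeatedly drawing samples under different schemes and counting how often a chi-square test flags a difference from the full ED population. The sketch below re-creates that logic for a single categorical variable with a synthetic population in which the case mix differs between daytime and night visits; the numbers and the "business hours" proxy are illustrative assumptions, not the authors' data.

      import numpy as np
      from scipy.stats import chi2_contingency

      rng = np.random.default_rng(1)
      categories = np.array(["private", "medicare", "medicaid", "self-pay"])

      # Synthetic ED population: the payer mix differs between daytime and night
      # visits, so a "business hours" sampler sees a skewed slice of the population.
      day = rng.choice(categories, size=13000, p=[0.40, 0.25, 0.20, 0.15])
      night = rng.choice(categories, size=8662, p=[0.25, 0.20, 0.30, 0.25])
      population = np.concatenate([day, night])
      pop_counts = np.array([(population == c).sum() for c in categories])

      def flagged_fraction(sampler, n_samples=1000, n=200):
          # Fraction of samples that a chi-square test flags as differing
          # from the full population on the payer-source variable.
          flags = 0
          for _ in range(n_samples):
              s = sampler(n)
              s_counts = np.array([(s == c).sum() for c in categories])
              _, p, _, _ = chi2_contingency(np.vstack([s_counts, pop_counts]))
              flags += p < 0.05
          return flags / n_samples

      true_random = lambda n: rng.choice(population, size=n, replace=False)
      business_hours = lambda n: rng.choice(day, size=n, replace=False)

      print("true random   :", flagged_fraction(true_random))      # near the nominal 5%
      print("business hours:", flagged_fraction(business_hours))   # well above 5%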

  17. Gravel Mobility in a High Sand Content Riverbed

    NASA Astrophysics Data System (ADS)

    Haschenburger, J. K.

    2017-12-01

    In sand-gravel channels, sand may modify gravel transport by changing conditions of entrainment and promoting longer displacements or gravel may inhibit sand transport if concentrated into distinct deposits, which restrict sand supply with consequences for migrating bedform size or form. This study reports on gravel mobility in the lower San Antonio River, Texas, where gravel content in the bed material ranges from about 1% to more than 20%. Sediment transport observations were collected at three U.S. Geological Survey gauging stations by deploying a Helley-Smith sampler with a 0.2 mm mesh bag from which transport rates and mobile grain sizes were determined. The flow rates sampled translate into an annual exceedance expectation from 0.2% to 98%. Gravel transport rates are generally two orders of magnitude smaller than the rates of sand transport. However, the finest gravels are transported at rates on the same order of magnitude as the coarsest sands. At all sites, the 2 and 2.8 mm fractions are transported at the lowest flow rate sampled, suggesting mobility for at least 38% to as much as 98% of the year. Fractions as large as 8 mm are mobilized at flow rates that are expected between 25% and 53% of the year. The largest fractions captured in the sampling (16 to 32 mm) require flows closer to bankfull conditions that occur no more than 0.8% of the year. Results document that some gravel sizes can be frequently transported in low gradient riverbeds with high sand content.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lieman-Sifry, Jesse; Hughes, A. Meredith; Flaherty, Kevin M.

    We present a CO(2-1) and 1240 μm continuum survey of 23 debris disks with spectral types B9-G1, observed at an angular resolution of 0.″5–1″ with the Atacama Large Millimeter/Submillimeter Array (ALMA). The sample was selected for large infrared excess and age ∼10 Myr, to characterize the prevalence of molecular gas emission in young debris disks. We identify three CO-rich debris disks, plus two additional tentative (3σ) CO detections. Twenty disks were detected in the continuum at the >3σ level. For the 12 disks in the sample that are spatially resolved by our observations, we perform an independent analysis of the interferometric continuum visibilities to constrain the basic dust disk geometry, as well as a simultaneous analysis of the visibilities and broadband spectral energy distribution to constrain the characteristic grain size and disk mass. The gas-rich debris disks exhibit preferentially larger outer radii in their dust disks, and a higher prevalence of characteristic grain sizes smaller than the blowout size. The gas-rich disks do not exhibit preferentially larger dust masses, contrary to expectations for a scenario in which a higher cometary destruction rate would be expected to result in a larger mass of both CO and dust. The three debris disks in our sample with strong CO detections are all around A stars: the conditions in disks around intermediate-mass stars appear to be the most conducive to the survival or formation of CO.

  19. Sample path analysis of contribution and reward in cooperative groups.

    PubMed

    Toyoizumi, Hiroshi

    2009-02-07

    Explaining cooperative behavior is one of the major challenges in both biology and human society. The individual reward in a cooperative group depends on how rewards are shared within the group. Thus, the group size dynamics of a cooperative group and its reward-allocation rule seem essential for evaluating the emergence of cooperative groups. We apply a sample path-based analysis, an extension of Little's formula, to general cooperative groups. We show that the expected reward is insensitive to the specific reward-allocation rule and to the probabilistic structure of the group dynamics, and that a simple productivity condition guarantees the expected reward is larger than the average contribution. As an example, we examine social queues to see the insensitivity result in detail.

  20. The distribution of dragonfly larvae in a South Carolina stream: relationships with sediment type, body size, and the presence of other larvae.

    PubMed

    Worthen, Wade B; Horacek, Henry Joseph

    2015-01-01

    Dragonfly larvae were sampled in Little Creek, Greenville, SC. The distributions of five common species were described relative to sediment type, body size, and the presence of other larvae. In total, 337 quadrats (1 m by 0.5 m) were sampled by kick seine. For each quadrat, the substrate was classified as sand, sand-cobble mix, cobble, coarse, or rock, and water depth and distance from bank were measured. Larvae were identified to species, and the lengths of the body, head, and metafemur were measured. Species were distributed differently across sediment types: sanddragons, Progomphus obscurus (Rambur) (Odonata: Gomphidae), were common in sand; twin-spotted spiketails, Cordulegaster maculata Selys (Odonata: Cordulegastridae), preferred a sand-cobble mix; Maine snaketails, Ophiogomphus mainensis Packard (Odonata: Gomphidae), preferred cobble and coarse sediments; fawn darners, Boyeria vinosa (Say) (Odonata: Aeshnidae), preferred coarse sediments; and Eastern least clubtails, Stylogomphus albistylus (Hagen) (Odonata: Gomphidae), preferred coarse and rock sediments. P. obscurus and C. maculata co-occurred more frequently than expected by chance, as did O. mainensis, B. vinosa, and S. albistylus. Mean size varied among species, and species preferences contributed to differences in mean size across sediment types. There were significant negative associations among larval size classes: small larvae (<12 mm) occurred less frequently with large larvae (>15 mm) than expected by chance, and large larvae were alone in quadrats more frequently than other size classes. Species may select habitats at a large scale based on sediment type and their functional morphology, but small scale distributions are consistent with competitive displacement or intraguild predation. © The Author 2015. Published by Oxford University Press on behalf of the Entomological Society of America.

  1. Expectations and Support for Scholarly Activity in Schools of Business.

    ERIC Educational Resources Information Center

    Bohrer, Paul; Dolphin, Robert, Jr.

    1985-01-01

    Addresses issues relating to scholarship productivity and examines these issues with consideration given to the size and the accreditation status of the business schools sampled. First, how important is scholarly activity within an institution for a variety of personnel decisions? Second, what is the relative importance of various types of…

  2. Distances of Dwarf Carbon Stars

    NASA Astrophysics Data System (ADS)

    Harris, Hugh C.; Dahn, Conard C.; Subasavage, John P.; Munn, Jeffrey A.; Canzian, Blaise J.; Levine, Stephen E.; Monet, Alice B.; Pier, Jeffrey R.; Stone, Ronald C.; Tilleman, Trudy M.; Hartkopf, William I.

    2018-06-01

    Parallaxes are presented for a sample of 20 nearby dwarf carbon stars. The inferred luminosities cover almost two orders of magnitude. Their absolute magnitudes and tangential velocities confirm prior expectations that some originate in the Galactic disk, although more than half of this sample are halo stars. Three stars are found to be astrometric binaries, and orbital elements are determined; their semimajor axes are 1–3 au, consistent with the size of an AGB mass-transfer donor star.

  3. A multi-scale study of Orthoptera species richness and human population size controlling for sampling effort

    NASA Astrophysics Data System (ADS)

    Cantarello, Elena; Steck, Claude E.; Fontana, Paolo; Fontaneto, Diego; Marini, Lorenzo; Pautasso, Marco

    2010-03-01

    Recent large-scale studies have shown that biodiversity-rich regions also tend to be densely populated areas. The most obvious explanation is that biodiversity and human beings tend to match the distribution of energy availability, environmental stability and/or habitat heterogeneity. However, the species-people correlation can also be an artefact, as more populated regions could show more species because of a more thorough sampling. Few studies have tested this sampling bias hypothesis. Using a newly collated dataset, we studied whether Orthoptera species richness is related to human population size in Italy's regions (average area 15,000 km²) and provinces (2,900 km²). As expected, the observed number of species increases significantly with increasing human population size for both grain sizes, although the proportion of variance explained is minimal at the provincial level. However, variations in observed Orthoptera species richness are primarily associated with the available number of records, which is in turn well correlated with human population size (at least at the regional level). Estimated Orthoptera species richness (Chao2 and Jackknife) also increases with human population size both for regions and provinces. Both for regions and provinces, this increase is not significant when controlling for variation in area and number of records. Our study confirms the hypothesis that broad-scale human population-biodiversity correlations can in some cases be artefactual. More systematic sampling of less studied taxa such as invertebrates is necessary to ascertain whether biogeographical patterns persist when sampling effort is kept constant or included in models.

  4. How often should we expect to be wrong? Statistical power, P values, and the expected prevalence of false discoveries.

    PubMed

    Marino, Michael J

    2018-05-01

    There is a clear perception in the literature that there is a crisis in reproducibility in the biomedical sciences. Many underlying factors contributing to the prevalence of irreproducible results have been highlighted with a focus on poor design and execution of experiments along with the misuse of statistics. While these factors certainly contribute to irreproducibility, relatively little attention outside of the specialized statistical literature has focused on the expected prevalence of false discoveries under idealized circumstances. In other words, when everything is done correctly, how often should we expect to be wrong? Using a simple simulation of an idealized experiment, it is possible to show the central role of sample size and the related quantity of statistical power in determining the false discovery rate, and in accurate estimation of effect size. According to our calculations, based on current practice many subfields of biomedical science may expect their discoveries to be false at least 25% of the time, and the only viable course to correct this is to require the reporting of statistical power and a minimum of 80% power (1 - β = 0.80) for all studies. Copyright © 2017 Elsevier Inc. All rights reserved.
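
    The abstract's central point follows from Bayes' rule applied to significance testing: the expected prevalence of false discoveries depends on power, alpha, and the prior probability that a tested effect is real. A minimal calculation is sketched below; the 10% prior is an illustrative assumption.

      def false_discovery_prevalence(power, alpha=0.05, prior_true=0.10):
          # Expected fraction of significant results that are false positives,
          # given the prior probability that a tested effect is real.
          true_positives = prior_true * power
          false_positives = (1 - prior_true) * alpha
          return false_positives / (true_positives + false_positives)

      for power in (0.20, 0.50, 0.80):
          print(f"power={power:.2f} -> expected false discoveries "
                f"{false_discovery_prevalence(power):.0%}")
      # With a 10% prior: about 69%, 47% and 36% for power 0.2, 0.5 and 0.8.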

  5. Evaluation of actual vs expected photodynamic therapy spot size.

    PubMed

    Ranchod, Tushar M; Brucker, Alexander J; Liu, Chengcheng; Cukras, Catherine A; Hopkins, Tim B; Ying, Gui-Shuang

    2009-05-01

    To determine the accuracy of the photodynamic therapy (PDT) laser spot size on the retina as generated by 2 Food and Drug Administration (FDA)-approved lasers. Prospective observational case series. Fundus photographs were taken of 1 eye of each of 10 subjects with the WinStation 4000 fundus photography system (OIS; Ophthalmic Imaging Systems, Sacramento, California, USA); disc size was calculated using OIS software. Slit-lamp photographs were taken of the PDT laser spot focused on the retina adjacent to the optic disc, using various spot sizes in combination with 3 different contact lenses and 2 different lasers. Spot size at the retina was determined by measuring the ratio of disc diameter to spot diameter in Adobe Photoshop (San Jose, California, USA) and applying this ratio to the OIS disc measurements. Spot size at the retina averaged 87% of expected spot size for the Coherent Opal laser (Coherent Inc, Santa Clara, California, USA) and 104% of expected spot size for the Zeiss Visulas laser (Carl Zeiss Meditec Inc, Dublin, California, USA)(P = .002). Multivariate analysis demonstrated that percentage of expected spot size decreased with larger spot diameter (P = .01 for Coherent laser; P = .02 for Zeiss laser). PDT spot size at the retina appears to be consistently smaller than expected for the Coherent laser while the spot size was consistently within 10% of expected size for the Zeiss laser. The deviation from expected size increased with larger spot size using the Coherent laser.

  6. The role of intramolecular nonbonded interaction and angle sampling in single-step free energy perturbation

    NASA Astrophysics Data System (ADS)

    Chiang, Ying-Chih; Pang, Yui Tik; Wang, Yi

    2016-12-01

    Single-step free energy perturbation (sFEP) has often been proposed as an efficient tool for a quick free energy scan due to its straightforward protocol and the ability to recycle an existing molecular dynamics trajectory for free energy calculations. Although sFEP is expected to fail when the sampling of a system is inefficient, it is often expected to hold for an alchemical transformation between ligands with a moderate difference in their sizes, e.g., transforming a benzene into an ethylbenzene. Yet, exceptions were observed in calculations for anisole and methylaniline, which have similar physical sizes as ethylbenzene. In this study, we show that such exceptions arise from the sampling inefficiency on an unexpected rigid degree of freedom, namely, the bond angle θ. The distributions of θ differ dramatically between two end states of a sFEP calculation, i.e., the conformation of the ligand changes significantly during the alchemical transformation process. Our investigation also reveals the interrelation between the ligand conformation and the intramolecular nonbonded interactions. This knowledge suggests a best combination of the ghost ligand potential and the dual topology setting, which improves the accuracy in a single reference sFEP calculation by bringing down its error from around 5 kBT to kBT.

  7. Potential Reporting Bias in Neuroimaging Studies of Sex Differences.

    PubMed

    David, Sean P; Naudet, Florian; Laude, Jennifer; Radua, Joaquim; Fusar-Poli, Paolo; Chu, Isabella; Stefanick, Marcia L; Ioannidis, John P A

    2018-04-17

    Numerous functional magnetic resonance imaging (fMRI) studies have reported sex differences. To empirically evaluate for evidence of excessive significance bias in this literature, we searched for published fMRI studies of human brain to evaluate sex differences, regardless of the topic investigated, in Medline and Scopus over 10 years. We analyzed the prevalence of conclusions in favor of sex differences and the correlation between study sample sizes and number of significant foci identified. In the absence of bias, larger studies (better powered) should identify a larger number of significant foci. Across 179 papers, median sample size was n = 32 (interquartile range 23-47.5). A median of 5 foci related to sex differences were reported (interquartile range, 2-9.5). Few articles (n = 2) had titles focused on no differences or on similarities (n = 3) between sexes. Overall, 158 papers (88%) reached "positive" conclusions in their abstract and presented some foci related to sex differences. There was no statistically significant relationship between sample size and the number of foci (-0.048% increase for every 10 participants, p = 0.63). The extremely high prevalence of "positive" results and the lack of the expected relationship between sample size and the number of discovered foci reflect probable reporting bias and excess significance bias in this literature.

  8. Optimal Inspection of Imports to Prevent Invasive Pest Introduction.

    PubMed

    Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G

    2018-03-01

    The United States imports more than 1 billion live plants annually, an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for the expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
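
    The quantity at the heart of the abstract, expected slippage, can be approximated by Monte Carlo under simplified assumptions: n units are drawn from a lot of N, each sampled infested unit is detected with probability d, and the lot ships only if nothing is detected. This is an illustrative sketch, not the authors' closed-form expression or their budget-allocation optimization.

      import numpy as np

      rng = np.random.default_rng(2)

      def expected_slippage(N, n, p, d, n_sim=50_000):
          # Monte Carlo estimate of accepted infested units per lot.
          # N: lot size, n: inspected units, p: infestation rate,
          # d: probability an inspected infested unit is detected.
          slippage = 0.0
          for _ in range(n_sim):
              infested = rng.binomial(N, p)                    # infested units in the lot
              sampled = rng.hypergeometric(infested, N - infested, n)
              detected = rng.binomial(sampled, d)              # imperfect inspection
              if detected == 0:                                # nothing found, lot accepted
                  slippage += infested                         # every infested unit ships
          return slippage / n_sim

      # Larger, more heavily infested lots contribute most of the expected slippage.
      print(expected_slippage(N=2000, n=60, p=0.02, d=0.9))
      print(expected_slippage(N=200, n=60, p=0.02, d=0.9))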

  9. [Sequential sampling plans to Orthezia praelonga Douglas (Hemiptera: Sternorrhyncha, Ortheziidae) in citrus].

    PubMed

    Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T

    2007-01-01

    Sequential sampling is characterized by using samples of variable size, and has the advantage of reducing sampling time and costs compared to fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio', at five, nine and 15 years of age. Twenty samplings were performed in the whole area of each stand by observing the presence or absence of scales on plants, with each plot comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans, an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 for stands with low and high infestation, respectively.
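
    Plans of this kind are built on Wald's sequential probability ratio test for presence/absence counts. The sketch below computes generic SPRT accept/reject boundaries for binomial sampling with alpha = beta = 0.10 as quoted in the abstract; the infestation proportions p0 and p1 are illustrative stand-ins for the authors' economic-threshold translation.

      import math

      def sprt_boundaries(p0, p1, alpha=0.10, beta=0.10, n_max=200):
          # Wald SPRT accept/reject boundaries (cumulative infested plots)
          # for presence/absence sampling, H0: p = p0 vs H1: p = p1.
          a = math.log((1 - beta) / alpha)
          b = math.log(beta / (1 - alpha))
          denom = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
          slope = math.log((1 - p0) / (1 - p1)) / denom
          return [(n, slope * n + b / denom, slope * n + a / denom)
                  for n in range(1, n_max + 1)]

      # Below the lower line: stop, no control needed; above the upper line: stop,
      # recommend control; in between: keep sampling.
      for n, lo, hi in sprt_boundaries(p0=0.01, p1=0.03)[::40]:
          print(f"n={n:3d}  accept<= {lo:6.2f}  reject>= {hi:6.2f}")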

  10. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
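
    One reading of the recipe above, for a logistic model with a single continuous covariate, is sketched below: find two response probabilities whose log-odds differ by twice the slope times the covariate standard deviation, constrain their average to the overall response probability, and then apply the usual two-proportion sample size formula. The function name and example values are illustrative assumptions, not the authors' code.

      import math
      from scipy.optimize import brentq
      from scipy.stats import norm

      def logistic_sample_size(beta, sd_x, p_overall, alpha=0.05, power=0.80):
          # Total N for testing slope `beta` in logistic regression via the
          # equivalent two-sample problem: two equal groups whose log-odds differ
          # by 2*beta*sd_x, with the overall event probability held at p_overall.
          expit = lambda z: 1.0 / (1.0 + math.exp(-z))
          logit = lambda p: math.log(p / (1.0 - p))
          delta = 2.0 * beta * sd_x

          # Choose p1 (and p2) so the average event probability stays at p_overall.
          f = lambda p1: 0.5 * (p1 + expit(logit(p1) + delta)) - p_overall
          p1 = brentq(f, 1e-9, 1 - 1e-9)
          p2 = expit(logit(p1) + delta)

          # Standard two-proportion sample size per group (pooled-variance form).
          z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
          p_bar = (p1 + p2) / 2
          n_per_group = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                          + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                         / (p1 - p2) ** 2)
          return 2 * math.ceil(n_per_group)

      # Example: odds ratio of 1.5 per SD of the covariate, 30% overall response rate.
      print(logistic_sample_size(beta=math.log(1.5), sd_x=1.0, p_overall=0.30))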

  11. Improving the quality of biomarker discovery research: the right samples and enough of them.

    PubMed

    Pepe, Margaret S; Li, Christopher I; Feng, Ziding

    2015-06-01

    Biomarker discovery research has yielded few biomarkers that validate for clinical use. A contributing factor may be poor study designs. The goal in discovery research is to identify a subset of potentially useful markers from a large set of candidates assayed on case and control samples. We recommend the PRoBE design for selecting samples. We propose sample size calculations that require specifying: (i) a definition for biomarker performance; (ii) the proportion of useful markers the study should identify (Discovery Power); and (iii) the tolerable number of useless markers amongst those identified (False Leads Expected, FLE). We apply the methodology to a study of 9,000 candidate biomarkers for risk of colon cancer recurrence where a useful biomarker has positive predictive value ≥ 30%. We find that 40 patients with recurrence and 160 without recurrence suffice to filter out 98% of useless markers (2% FLE) while identifying 95% of useful biomarkers (95% Discovery Power). Alternative methods for sample size calculation required more assumptions. Biomarker discovery research should utilize quality biospecimen repositories and include sample sizes that enable markers meeting prespecified performance characteristics for well-defined clinical applications to be identified. The scientific rigor of discovery research should be improved. ©2015 American Association for Cancer Research.
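
    The FLE and Discovery Power criteria can be related to an ordinary per-marker screen: with mostly useless candidates, the false leads expected are roughly the number of useless markers times the per-marker false-positive rate, while Discovery Power is the per-marker power at the same threshold. The back-of-the-envelope sketch below uses assumed inputs (for example, 20 truly useful markers) and is not the authors' PPV-based calculation.

      def screening_yield(n_candidates, n_useful, per_marker_fp_rate, per_marker_power):
          # Expected outcome when every candidate marker gets the same screen:
          # per_marker_fp_rate is the chance a useless marker passes (the FLE rate),
          # per_marker_power the chance a truly useful marker passes.
          n_useless = n_candidates - n_useful
          false_leads = n_useless * per_marker_fp_rate
          useful_found = n_useful * per_marker_power
          return false_leads, useful_found

      # 9,000 candidates; suppose 20 of them truly meet the PPV >= 30% bar.
      # A 2% FLE rate and 95% discovery power mirror the targets in the abstract.
      false_leads, useful_found = screening_yield(9000, 20, 0.02, 0.95)
      print(f"expected false leads ~ {false_leads:.0f}, useful markers found ~ {useful_found:.1f}")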

  12. Non-invasive genetic censusing and monitoring of primate populations.

    PubMed

    Arandjelovic, Mimi; Vigilant, Linda

    2018-03-01

    Knowing the density or abundance of primate populations is essential for their conservation management and contextualizing socio-demographic and behavioral observations. When direct counts of animals are not possible, genetic analysis of non-invasive samples collected from wildlife populations allows estimates of population size with higher accuracy and precision than is possible using indirect signs. Furthermore, in contrast to traditional indirect survey methods, prolonged or periodic genetic sampling across months or years enables inference of group membership, movement, dynamics, and some kin relationships. Data may also be used to estimate sex ratios, sex differences in dispersal distances, and detect gene flow among locations. Recent advances in capture-recapture models have further improved the precision of population estimates derived from non-invasive samples. Simulations using these methods have shown that the confidence interval of point estimates includes the true population size when assumptions of the models are met, and therefore this range of population size minima and maxima should be emphasized in population monitoring studies. Innovations such as the use of sniffer dogs or anti-poaching patrols for sample collection are important to ensure adequate sampling, and the expected development of efficient and cost-effective genotyping by sequencing methods for DNAs derived from non-invasive samples will automate and speed analyses. © 2018 Wiley Periodicals, Inc.

  13. Effects of Group Size and Lack of Sphericity on the Recovery of Clusters in K-means Cluster Analysis.

    PubMed

    Craen, Saskia de; Commandeur, Jacques J F; Frank, Laurence E; Heiser, Willem J

    2006-06-01

    K-means cluster analysis is known for its tendency to produce spherical and equally sized clusters. To assess the magnitude of these effects, a simulation study was conducted in which populations were created with varying departures from sphericity and varying group sizes. An analysis of the recovery of clusters in the samples taken from these populations showed a significant effect of lack of sphericity and group size. This effect was, however, not as large as expected, with a recovery index still above 0.5 in the "worst case scenario." An interaction effect between the two data aspects was also found. The decreasing trend in the recovery of clusters for increasing departures from sphericity differs between equal and unequal group sizes.
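
    The simulation design described above can be sketched in a few lines with scikit-learn: generate two Gaussian clusters with varying elongation and group sizes, run k-means, and score recovery. The adjusted Rand index is used here as the recovery index, which may differ from the index used in the paper; all settings are illustrative.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import adjusted_rand_score

      rng = np.random.default_rng(3)

      def recovery(elongation, sizes, n_rep=20):
          # Mean adjusted Rand index of k-means on two Gaussian clusters whose
          # covariance is stretched along one axis by `elongation`.
          cov = np.diag([1.0, elongation])                  # departure from sphericity
          centers = np.array([[0.0, 0.0], [4.0, 0.0]])
          scores = []
          for _ in range(n_rep):
              X = np.vstack([rng.multivariate_normal(c, cov, n)
                             for c, n in zip(centers, sizes)])
              y = np.repeat([0, 1], sizes)
              labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
              scores.append(adjusted_rand_score(y, labels))
          return float(np.mean(scores))

      for elongation in (1, 4, 16):                         # spherical to strongly elongated
          for sizes in ((100, 100), (180, 20)):             # equal vs. unequal group sizes
              print(f"elongation={elongation:2d} sizes={sizes}: "
                    f"recovery={recovery(elongation, sizes):.2f}")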

  14. Real-time dynamic modelling for the design of a cluster-randomized phase 3 Ebola vaccine trial in Sierra Leone.

    PubMed

    Camacho, A; Eggo, R M; Goeyvaerts, N; Vandebosch, A; Mogg, R; Funk, S; Kucharski, A J; Watson, C H; Vangeneugden, T; Edmunds, W J

    2017-01-23

    Declining incidence and spatial heterogeneity complicated the design of phase 3 Ebola vaccine trials during the tail of the 2013-16 Ebola virus disease (EVD) epidemic in West Africa. Mathematical models can provide forecasts of expected incidence through time and can account for both vaccine efficacy in participants and effectiveness in populations. Determining expected disease incidence was critical to calculating power and determining trial sample size. In real-time, we fitted, forecasted, and simulated a proposed phase 3 cluster-randomized vaccine trial for a prime-boost EVD vaccine in three candidate regions in Sierra Leone. The aim was to forecast trial feasibility in these areas through time and guide study design planning. EVD incidence was highly variable during the epidemic, especially in the declining phase. Delays in trial start date were expected to greatly reduce the ability to discern an effect, particularly as a trial with an effective vaccine would cause the epidemic to go extinct more quickly in the vaccine arm. Real-time updates of the model allowed decision-makers to determine how trial feasibility changed with time. This analysis was useful for vaccine trial planning because we simulated effectiveness as well as efficacy, which is possible with a dynamic transmission model. It contributed to decisions on choice of trial location and feasibility of the trial. Transmission models should be utilised as early as possible in the design process to provide mechanistic estimates of expected incidence, with which decisions about sample size, location, timing, and feasibility can be determined. Copyright © 2016. Published by Elsevier Ltd.

  15. Emission characteristics and chemical components of size-segregated particulate matter in iron and steel industry

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Cheng, Shuiyuan; Yao, Sen; Xu, Tiebing; Zhang, Tingting; Ma, Yuetao; Wang, Hongliang; Duan, Wenjiao

    2018-06-01

    As one of the most energy-intensive and polluting industries, the iron and steel industry is regarded as a major source of particulate matter emissions. In this study, chemical components of size-segregated particulate matter (PM) emitted from different manufacturing units in the iron and steel industry were sampled by a comprehensive sampling system. Results showed that the average particle mass concentration was highest in the sintering process, followed by the puddling, steelmaking and then rolling processes. PM samples were divided into eight size fractions for testing the chemical components: SO₄²⁻ and NH₄⁺ were distributed more into fine particles, while most of the Ca²⁺ was concentrated in coarse particles; the size distribution of mineral elements depended on the raw materials applied. Moreover, a local database of PM chemical source profiles for the iron and steel industry was built and applied in CMAQ modeling for simulating SO₄²⁻ and NO₃⁻ concentrations; results showed that the accuracy of the model simulation improved with local chemical source profiles compared to the SPECIATE database. The results gained from this study are expected to be helpful for understanding the components of PM in the iron and steel industry and to contribute to source apportionment research.

  16. 77 FR 47590 - Notice of Request for a Revision to and Extension of Approval of an Information Collection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... requirements or power calculations that justify the proposed sample size, the expected response rate, methods...] Notice of Request for a Revision to and Extension of Approval of an Information Collection; Qualitative... associated with qualitative customer and stakeholder feedback on service delivery by the Animal and Plant...

  17. Statistical correction of the Winner’s Curse explains replication variability in quantitative trait genome-wide association studies

    PubMed Central

    Pe’er, Itsik

    2017-01-01

    Genome-wide association studies (GWAS) have identified hundreds of SNPs responsible for variation in human quantitative traits. However, genome-wide-significant associations often fail to replicate across independent cohorts, in apparent inconsistency with their strong effects in discovery cohorts. This limited success of replication raises pervasive questions about the utility of the GWAS field. We identify all 332 studies of quantitative traits from the NHGRI-EBI GWAS Database with attempted replication. We find that the majority of studies provide insufficient data to evaluate replication rates. The remaining papers replicate significantly worse than expected (p < 10⁻¹⁴), even when adjusting for regression-to-the-mean of effect size between discovery and replication cohorts, termed the Winner's Curse (p < 10⁻¹⁶). We show this is due in part to misreporting of replication cohort size as a maximum number, rather than a per-locus one. In 39 studies accurately reporting per-locus cohort size for attempted replication of 707 loci in samples with similar ancestry, the replication rate matched expectation (predicted 458, observed 457, p = 0.94). In contrast, ancestry differences between replication and discovery (13 studies, 385 loci) cause the most highly-powered decile of loci to replicate worse than expected, due to differences in linkage disequilibrium. PMID:28715421
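
    The Winner's Curse adjustment discussed above reflects the fact that, conditional on crossing a stringent significance threshold, discovery-stage effect estimates are inflated, especially when discovery power is low, so replication cohorts sized from raw estimates end up underpowered. A small simulation of that inflation is sketched below; the effect size, standard error, and threshold are illustrative.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(4)

      def winners_curse_demo(true_beta=0.05, se=0.02, alpha=5e-8, n_sim=2_000_000):
          # Average estimated effect among "discoveries" vs. the true effect.
          z_crit = norm.isf(alpha / 2)                   # genome-wide threshold
          beta_hat = rng.normal(true_beta, se, n_sim)    # discovery-stage estimates
          significant = np.abs(beta_hat / se) > z_crit
          return float(beta_hat[significant].mean()), float(significant.mean())

      mean_significant_estimate, discovery_power = winners_curse_demo()
      print(f"true effect 0.050, mean estimate among discoveries "
            f"{mean_significant_estimate:.3f}, discovery power {discovery_power:.4f}")
      # With low discovery power the significant estimates are strongly inflated, so
      # replication cohorts sized from the raw estimates end up underpowered.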

  18. Sampling artifacts in perspective and stereo displays

    NASA Astrophysics Data System (ADS)

    Pfautz, Jonathan D.

    2001-06-01

    The addition of stereo cues to perspective displays is generally expected to improve the perception of depth. However, the display's pixel array samples both perspective and stereo depth cues, introducing inaccuracies and inconsistencies into the representation of an object's depth. The position, size and disparity of an object will be inaccurately presented and size and disparity will be inconsistently presented across depth. These inconsistencies can cause the left and right edges of an object to appear at different stereo depths. This paper describes how these inconsistencies result in conflicts between stereo and perspective depth information. A relative depth judgement task was used to explore these conflicts. Subjects viewed two objects and reported which appeared closer. Three conflicts resulting from inconsistencies caused by sampling were examined: (1) Perspective size and location versus stereo disparity. (2) Perspective size versus perspective location and stereo disparity. (3) Left and right edge disparity versus perspective size and location. In the first two cases, subjects achieved near-perfect accuracy when perspective and disparity cues were complementary. When size and disparity were inconsistent and thus in conflict, stereo dominated perspective. Inconsistency between the disparities of the horizontal edges of an object confused the subjects, even when complementary perspective and stereo information was provided. Since stereo was the dominant cue and was ambiguous across the object, this led to significantly reduced accuracy. Edge inconsistencies also led to more complaints about visual fatigue and discomfort.

  19. Is portion size selection associated with expected satiation, perceived healthfulness or expected tastiness? A case study on pizza using a photograph-based computer task.

    PubMed

    Labbe, D; Rytz, A; Godinot, N; Ferrage, A; Martin, N

    2017-01-01

    Increasing portion sizes over the last 30 years are considered to be one of the factors underlying overconsumption. Past research on the drivers of portion selection for foods showed that larger portions are selected for foods delivering low expected satiation. However, the respective contribution of expected satiation vs. two other potential drivers of portion size selection, i.e. perceived healthfulness and expected tastiness, has never been explored. In this study, we conjointly explored the role of expected satiation, perceived healthfulness and expected tastiness when selecting portions within a range of six commercial pizzas varying in their toppings and brands. For each product, 63 pizza consumers selected a portion size that would satisfy them for lunch and scored their expected satiation, perceived healthfulness and expected tastiness. Because six participants selected an entire pizza as their ideal portion regardless of topping or brand, their data sets were not considered, and the analyses were completed on responses from the remaining 57 participants. Hierarchical multiple regression analyses showed that portion size variance was predicted by the perceived healthfulness and expected tastiness variables. Two sub-groups of participants with different portion size patterns across pizzas were identified through post-hoc exploratory analysis. The explanatory power of the regression model was significantly improved by adding interaction terms between sub-group and expected satiation and between sub-group and perceived healthfulness to the model. Analysis at the sub-group level showed either a positive or a negative association between portion size and expected satiation, depending on the sub-group. For one group, portion size selection was more health-driven and, for the other, more hedonically driven. These results show that, even for a well-liked product category, perceived healthfulness can be an important factor influencing portion size decisions. Copyright © 2016 Nestec S.A. Published by Elsevier Ltd. All rights reserved.
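
    A minimal sketch of a hierarchical regression of this kind is shown below, using synthetic data and the statsmodels formula interface; the variable names (portion, satiation, health, tasty, subgroup) are hypothetical stand-ins for the study's measures, and the original analysis may have used different software and preprocessing.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 342  # e.g. 57 participants x 6 pizzas (illustrative only)
    df = pd.DataFrame({
        "satiation": rng.normal(5, 1.5, n),
        "health": rng.normal(5, 1.5, n),
        "tasty": rng.normal(6, 1.5, n),
        "subgroup": rng.integers(0, 2, n),
    })
    # Synthetic portion sizes with sub-group-specific satiation slopes, just to exercise the model.
    df["portion"] = (200 + 10 * df["health"] + 8 * df["tasty"]
                     + np.where(df["subgroup"] == 1, -12, 6) * df["satiation"]
                     + rng.normal(0, 20, n))

    # Step 1: main-effects model; Step 2: add sub-group interaction terms.
    m1 = smf.ols("portion ~ satiation + health + tasty", data=df).fit()
    m2 = smf.ols("portion ~ satiation + health + tasty"
                 " + C(subgroup):satiation + C(subgroup):health", data=df).fit()

    print(round(m1.rsquared, 3), round(m2.rsquared, 3))
    print(m2.compare_f_test(m1))  # F test for the improvement from the interaction terms
    ```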

  20. An Unbiased Estimator of Gene Diversity with Improved Variance for Samples Containing Related and Inbred Individuals of any Ploidy

    PubMed Central

    Harris, Alexandre M.; DeGiorgio, Michael

    2016-01-01

    Gene diversity, or expected heterozygosity (H), is a common statistic for assessing genetic variation within populations. Estimation of this statistic decreases in accuracy and precision when individuals are related or inbred, due to increased dependence among allele copies in the sample. The original unbiased estimator of expected heterozygosity underestimates true population diversity in samples containing relatives, as it only accounts for sample size. More recently, a general unbiased estimator of expected heterozygosity was developed that explicitly accounts for related and inbred individuals in samples. Though unbiased, this estimator’s variance is greater than that of the original estimator. To address this issue, we introduce a general unbiased estimator of gene diversity for samples containing related or inbred individuals, which employs the best linear unbiased estimator of allele frequencies, rather than the commonly used sample proportion. We examine the properties of this estimator, H̃_BLUE, relative to alternative estimators using simulations and theoretical predictions, and show that it predominantly has the smallest mean squared error relative to others. Further, we empirically assess the performance of H̃_BLUE on a global human microsatellite dataset of 5795 individuals, from 267 populations, genotyped at 645 loci. Additionally, we show that the improved variance of H̃_BLUE leads to improved estimates of the population differentiation statistic, FST, which employs measures of gene diversity within its calculation. Finally, we provide an R script, BestHet, to compute this estimator from genomic and pedigree data. PMID:28040781
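
    The classical estimator referred to above, the one that corrects only for sample size, has a simple closed form; a minimal sketch is given below, assuming unrelated, non-inbred samples. The BLUE-weighted estimator H̃_BLUE introduced in the paper additionally requires kinship and inbreeding coefficients and is not reproduced here.

    ```python
    from collections import Counter

    def expected_heterozygosity(alleles):
        """Classical sample-size-corrected gene-diversity estimator for a sample of
        allele copies drawn from unrelated, non-inbred individuals:
            H = n/(n-1) * (1 - sum_i p_i^2)
        where n is the number of allele copies and p_i are sample allele frequencies."""
        n = len(alleles)
        counts = Counter(alleles)
        sum_p2 = sum((c / n) ** 2 for c in counts.values())
        return n / (n - 1) * (1.0 - sum_p2)

    # Example: 10 allele copies at one microsatellite locus.
    print(expected_heterozygosity(["A1", "A1", "A2", "A3", "A2", "A1", "A4", "A2", "A3", "A1"]))
    ```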

  1. Evolution of sociality by natural selection on variances in reproductive fitness: evidence from a social bee.

    PubMed

    Stevens, Mark I; Hogendoorn, Katja; Schwarz, Michael P

    2007-08-29

    The Central Limit Theorem (CLT) is a statistical principle which states that as the number of repeated samples from any population increases, the variance among sample means will decrease and the means will become more normally distributed. It has been conjectured that the CLT has the potential to provide benefits for group living in some animals via greater predictability in food acquisition, if the number of foraging bouts increases with group size. The potential existence of benefits for group living derived from a purely statistical principle is highly intriguing and has implications for the origins of sociality. Here we show that in a social allodapine bee the relationship between cumulative food acquisition (measured as total brood weight) and colony size accords with the CLT. We show that deviations from expected food income decrease with group size, and that brood weights become more normally distributed both over time and with increasing colony size, as predicted by the CLT. Larger colonies are better able to match egg production to expected food intake, and better able to avoid the costs associated with producing more brood than can be reared while reducing the risk of under-exploiting the food resources that may be available. These benefits to group living derive from a purely statistical principle, rather than from ecological, ergonomic or genetic factors, and could apply to a wide variety of species. This in turn suggests that the CLT may provide benefits at the early evolutionary stages of sociality and that the evolution of group size could result from selection on variances in reproductive fitness. In addition, these benefits may help explain why sociality has evolved in some groups and not others.
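
    The statistical claim can be illustrated with a short simulation, assuming a right-skewed per-bout income distribution (hypothetical) and treating the number of foraging bouts as proportional to colony size: the relative deviation of mean income shrinks and its distribution becomes more symmetric as bouts accumulate.

    ```python
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(1)

    def simulate_mean_income(n_bouts, n_colonies=10_000):
        """Per-bout income drawn from a right-skewed (lognormal) distribution; returns the
        coefficient of variation and skewness of mean income per bout across colonies."""
        income = rng.lognormal(mean=0.0, sigma=1.0, size=(n_colonies, n_bouts))
        mean_income = income.mean(axis=1)
        return mean_income.std() / mean_income.mean(), skew(mean_income)

    for n_bouts in (1, 4, 16, 64):
        cv, sk = simulate_mean_income(n_bouts)
        print(f"bouts={n_bouts:3d}  CV of mean income={cv:.3f}  skewness={sk:.2f}")
    # CV shrinks roughly as 1/sqrt(n_bouts) and skewness approaches 0, as the CLT predicts.
    ```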

  2. Improving tritium exposure reconstructions using accelerator mass spectrometry

    PubMed Central

    Hunt, J. R.; Vogel, J. S.; Knezovich, J. P.

    2010-01-01

    Direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples and permits greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Because existing methodologies for quantifying tritium have some significant limitations, the development of tritium AMS has allowed improvements in reconstructing tritium exposure concentrations from environmental measurements and provides an important additional tool in assessing the temporal and spatial distribution of chronic exposure. Tritium exposure reconstructions using AMS were previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases. Although the current quantification limit of tritium AMS is not adequate to determine natural environmental variations in tritium concentrations, it is expected to be sufficient for studies assessing possible health effects from chronic environmental tritium exposure. PMID:14735274

  3. Strength and texture of Pt compressed to 63 GPa

    NASA Astrophysics Data System (ADS)

    Dorfman, Susannah M.; Shieh, Sean R.; Duffy, Thomas S.

    2015-02-01

    Angle- and energy-dispersive X-ray diffraction experiments in a radial geometry were performed in the diamond anvil cell on polycrystalline platinum samples at pressures up to 63 GPa. Observed yield strength and texture depend on grain size. For samples with 70-300-nm particle size, the yield strength is 5-6 GPa at ˜60 GPa. Coarse-grained (˜2-μm particles) Pt has a much lower yield strength of 1-1.5 GPa at ˜60 GPa. Face-centered cubic metals Pt and Au have lower strength to shear modulus ratio than body-centered cubic or hexagonal close-packed metals. While a 300-nm particle sample exhibits the <110> texture expected of face-centered-cubic metals under compression, smaller and larger particles show a weak mixed <110> and <100> texture under compression. Differences in texture development may also occur due to deviations from uniaxial stress under compression in the diamond anvil cell.

  4. Mechanical properties and failure behavior of unidirectional porous ceramics

    NASA Astrophysics Data System (ADS)

    Seuba, Jordi; Deville, Sylvain; Guizard, Christian; Stevenson, Adam J.

    2016-04-01

    We show that the honeycomb out-of-plane model derived by Gibson and Ashby can be applied to describe the compressive behavior of unidirectional porous materials. Ice-templating allowed us to process samples with accurate control over pore volume, size, and morphology. These samples allowed us to evaluate the effect of these microstructural variations on the compressive strength in a porosity range of 45-80%. The maximum strength of 286 MPa was achieved in the least porous ice-templated sample (P(%) = 49.9), with the smallest pore size (3 μm). We found that the out-of-plane model only holds when buckling is the dominant failure mode, as should be expected. Furthermore, we controlled total pore volume by adjusting solids loading and sintering temperature. This strategy allows us to independently control macroporosity and densification of walls, and the compressive strength of ice-templated materials is exclusively dependent on total pore volume.

  5. Mechanical properties and failure behavior of unidirectional porous ceramics.

    PubMed

    Seuba, Jordi; Deville, Sylvain; Guizard, Christian; Stevenson, Adam J

    2016-04-14

    We show that the honeycomb out-of-plane model derived by Gibson and Ashby can be applied to describe the compressive behavior of unidirectional porous materials. Ice-templating allowed us to process samples with accurate control over pore volume, size, and morphology. These samples allowed us to evaluate the effect of these microstructural variations on the compressive strength in a porosity range of 45-80%. The maximum strength of 286 MPa was achieved in the least porous ice-templated sample (P(%) = 49.9), with the smallest pore size (3 μm). We found that the out-of-plane model only holds when buckling is the dominant failure mode, as should be expected. Furthermore, we controlled total pore volume by adjusting solids loading and sintering temperature. This strategy allows us to independently control macroporosity and densification of walls, and the compressive strength of ice-templated materials is exclusively dependent on total pore volume.

  6. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    PubMed

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication of the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data from FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the transition probabilities and steady states converge to values that differ widely from the real ones if one uses the standard deterministic approach on noisy measurements. This provides support for our argument that, for the analysis of FACS data, one should consider the observed state as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
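
    The core maximum-likelihood step, and the bias introduced by plugging noisy observations into it, can be sketched as follows; the two-state process and the 15% misclassification rate are hypothetical, and this is not the MMSE estimator developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def ml_transition_matrix(states, n_states):
        """Maximum-likelihood transition probabilities from an observed state sequence:
        P_hat[i, j] = (# transitions i -> j) / (# visits to i)."""
        counts = np.zeros((n_states, n_states))
        for a, b in zip(states[:-1], states[1:]):
            counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    # Hypothetical two-state cell-type process.
    P_true = np.array([[0.9, 0.1],
                       [0.2, 0.8]])
    states = [0]
    for _ in range(50_000):
        states.append(rng.choice(2, p=P_true[states[-1]]))
    states = np.array(states)

    # Noisy observation: each state is misread with probability 0.15
    # (a crude stand-in for noisy FACS population-size measurements).
    observed = np.where(rng.random(states.size) < 0.15, 1 - states, states)

    print("ML from true states:\n", ml_transition_matrix(states, 2).round(3))
    print("ML from noisy states (deterministic approach):\n",
          ml_transition_matrix(observed, 2).round(3))
    # The noisy-state estimates are biased toward 0.5, illustrating why the observed
    # state should be treated as a random variable rather than as the true state.
    ```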

  7. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    PubMed

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution modelling (SDM) has become a popular method for utilising these data to understand the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDMs, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDMs using classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap), respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDMs. In contrast to previous studies, we found that for widespread species, a low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample sizes.

  8. Lowering sample size in comparative analyses can indicate a correlation where there is none: example from Rensch's rule in primates.

    PubMed

    Lindenfors, P; Tullberg, B S

    2006-07-01

    The fact that characters may co-vary in organism groups because of shared ancestry and not always because of functional correlations was the initial rationale for developing phylogenetic comparative methods. Here we point out a case where similarity due to shared ancestry can produce an undesired effect when conducting an independent contrasts analysis. Under special circumstances, using a low sample size will produce results indicating an evolutionary correlation between characters where an analysis of the same pattern utilizing a larger sample size will show that this correlation does not exist. This is the opposite of the effect expected from increasing sample size: normally, an increased sample size increases the chance of finding a correlation. The situation where the problem occurs is when co-variation between the two continuous characters analysed is clumped in clades, e.g. when some phylogenetically conservative factors affect both characters simultaneously. In such a case, the correlation between the two characters becomes contingent on the number of clades sharing this conservative factor that are included in the analysis, in relation to the number of species contained within these clades. Removing species scattered evenly over the phylogeny will in this case remove exactly the variation that diffuses the evolutionary correlation between the two characters - the variation contained within the clades sharing the conservative factor. We exemplify this problem by discussing a parallel in nature where the described problem may be of importance. This concerns the question of the presence or absence of Rensch's rule in primates.

  9. Microwave resonant and zero-field absorption study of doped magnetite prepared by a co-precipitation method.

    PubMed

    Aphesteguy, Juan Carlos; Jacobo, Silvia E; Lezama, Luis; Kurlyandskaya, Galina V; Schegoleva, Nina N

    2014-06-19

    Fe3O4 and ZnxFe3-xO4 pure and doped magnetite magnetic nanoparticles (NPs) were prepared in aqueous solution (Series A) or in a water-ethyl alcohol mixture (Series B) by the co-precipitation method. Only one ferromagnetic resonance line was observed in all cases under consideration, indicating that the materials are magnetically uniform. The shortfall of the resonance fields below the 3.27 kOe expected for spheres (at a frequency of 9.5 GHz) can be understood by taking into account dipolar forces, magnetoelasticity, or magnetocrystalline anisotropy. All samples show non-zero low-field absorption. For Series A samples the grain size decreases with an increase of the Zn content. In this case zero-field absorption does not correlate with the changes of the grain size. For Series B samples the grain size and zero-field absorption behavior correlate with each other. The highest zero-field absorption corresponded to a zinc concentration of 0.2 in both the A and B series. The high zero-field absorption of Fe3O4 ferrite magnetic NPs can be interesting for biomedical applications.
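
    For reference, the quoted 3.27 kOe expected for spheres follows from the standard resonance condition for an isotropic sphere, assuming a g-factor of about 2.08 (a ferrite-typical value; the abstract does not state the value actually used):

    ```latex
    % Resonance condition for an isotropic sphere (Kittel), where demagnetizing
    % contributions cancel; g = 2.08 is an assumed, ferrite-typical value.
    \[
      h\nu = g\,\mu_{\mathrm B}\,H_{\mathrm{res}}
      \quad\Longrightarrow\quad
      H_{\mathrm{res}} = \frac{h\nu}{g\,\mu_{\mathrm B}}
      = \frac{(6.626\times10^{-34}\,\mathrm{J\,s})(9.5\times10^{9}\,\mathrm{Hz})}
             {2.08\times 9.274\times10^{-24}\,\mathrm{J\,T^{-1}}}
      \approx 0.33\ \mathrm{T} \approx 3.3\ \mathrm{kOe},
    \]
    % close to the 3.27 kOe quoted in the abstract; anisotropy, magnetoelastic and
    % dipolar fields shift the observed resonance below this value.
    ```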

  10. Accuracy in parameter estimation for targeted effects in structural equation modeling: sample size planning for narrow confidence intervals.

    PubMed

    Lai, Keke; Kelley, Ken

    2011-06-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
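
    The AIPE logic can be illustrated on the simplest possible target, the mean of a single variable: choose the smallest n whose expected confidence interval width falls below the desired width. This is only a simplified stand-in for the SEM procedures implemented in MBESS, which use model-implied standard errors rather than sigma/sqrt(n).

    ```python
    from scipy import stats

    def aipe_n_for_mean(sigma, target_width, conf=0.95, n_max=100_000):
        """Smallest n such that the (approximate) expected width of the CI for a mean,
        2 * t_{n-1, 1-(1-conf)/2} * sigma / sqrt(n), is no larger than target_width.
        Simplified AIPE sketch; the assurance extension would inflate n so that the
        realized width meets the target with a specified probability."""
        for n in range(2, n_max):
            t_crit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
            if 2 * t_crit * sigma / n ** 0.5 <= target_width:
                return n
        raise ValueError("target width not reachable within n_max")

    # Example: expected 95% CI width of at most 0.2 SD units.
    print(aipe_n_for_mean(sigma=1.0, target_width=0.2))
    ```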

  11. Choosing a design to fit the situation: how to improve specificity and positive predictive values using Bayesian lot quality assurance sampling.

    PubMed

    Olives, Casey; Pagano, Marcello

    2013-02-01

    Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined. We present Bayesian-LQAS (B-LQAS), an approach to incorporating prior information into the choice of the LQAS sample size and decision rule, and explore its properties through a numerical study. Additionally, we analyse vaccination coverage data from UNICEF's State of the World's Children in 1968-1989 and 2008 to exemplify the performance of LQAS and B-LQAS. Results of our numerical study show that the choice of LQAS sample size and decision rule is sensitive to the distribution of prior information, as well as to individual beliefs about the importance of correct classification. Application of the B-LQAS approach to the UNICEF data improves specificity and PPV in both time periods (1968-1989 and 2008) with minimal reductions in sensitivity and negative predictive value. LQAS is shown to be a robust tool that is not necessarily prone to poor specificity and PPV as previously alleged. In situations where prior or historical data are available, B-LQAS can lead to improvements in expected performance.
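
    The classical (non-Bayesian) LQAS design step that B-LQAS builds on can be sketched as a search over sample size n and decision rule d using binomial probabilities; the coverage thresholds and risk limits below are hypothetical.

    ```python
    from scipy.stats import binom

    def lqas_design(p_upper, p_lower, alpha=0.10, beta=0.10, n_max=500):
        """Smallest (n, d): classify a lot as acceptable when the number of
        'successes' among n sampled units is >= d.
        Producer risk: P(reject | true coverage = p_upper) <= alpha.
        Consumer risk: P(accept | true coverage = p_lower) <= beta."""
        for n in range(1, n_max + 1):
            for d in range(0, n + 1):
                producer_risk = binom.cdf(d - 1, n, p_upper)   # fewer than d successes
                consumer_risk = binom.sf(d - 1, n, p_lower)    # d or more successes
                if producer_risk <= alpha and consumer_risk <= beta:
                    return n, d
        return None

    # Hypothetical vaccination-coverage thresholds: accept >= 80%, reject <= 50%.
    print(lqas_design(p_upper=0.80, p_lower=0.50))
    ```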

  12. DESCARTES' RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA.

    PubMed

    Bhaskar, Anand; Song, Yun S

    2014-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.
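
    For orientation, the simplest special case is the constant-size model, where the expected (unfolded) SFS has the classical closed form E[ξ_i] = θ/i; the piecewise models analysed in the paper generalize this baseline. A minimal sketch:

    ```python
    import numpy as np

    def expected_sfs_constant(n, theta=1.0):
        """Expected unfolded SFS under the standard neutral coalescent with constant
        population size: E[xi_i] = theta / i for i = 1, ..., n-1 (classical result).
        Piecewise demographic models make these entries more complicated functions
        of the size history; this constant-size case is the baseline."""
        i = np.arange(1, n)
        return theta / i

    sfs = expected_sfs_constant(n=10)
    print((sfs / sfs.sum()).round(3))   # normalized expected SFS for a sample of size 10
    ```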

  13. DESCARTES’ RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA1

    PubMed Central

    Bhaskar, Anand; Song, Yun S.

    2016-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the “folded” SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes’ rule of signs for polynomials to the Laplace transform of piecewise continuous functions. PMID:28018011

  14. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
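
    The paper's specific update rule is not given in the abstract; the sketch below is only a generic illustration, under assumed details, of resizing a population in proportion to an estimated fitness variance relative to a target signal difference, in the spirit of variance-based sizing rules, and is not the authors' algorithm.

    ```python
    import random

    def resize_population(pop, fitness, d_target=0.1, c=2.0, n_min=20, n_max=2000):
        """Generic variance-based resizing sketch (assumed form, not the paper's rule):
        target size ~ c * var(fitness) / d_target**2, where d_target is the smallest
        fitness difference the GA should reliably detect. The population is grown by
        cloning random members or shrunk by dropping the least fit."""
        mean_f = sum(fitness) / len(fitness)
        var_f = sum((f - mean_f) ** 2 for f in fitness) / max(len(fitness) - 1, 1)
        target = int(min(max(c * var_f / d_target ** 2, n_min), n_max))
        ranked = [p for _, p in sorted(zip(fitness, pop), key=lambda t: t[0], reverse=True)]
        if target <= len(ranked):
            return ranked[:target]
        return ranked + [random.choice(ranked) for _ in range(target - len(ranked))]

    # Tiny usage example with random bit-string-like individuals.
    pop = [[random.random() for _ in range(8)] for _ in range(100)]
    fit = [sum(ind) for ind in pop]
    print(len(resize_population(pop, fit)))
    ```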

  15. Value of information analysis optimizing future trial design from a pilot study on catheter securement devices.

    PubMed

    Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A

    2014-12-01

    Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design, such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected value of additional information and the expected cost of the trial. We applied value of information methods to the results of a pilot study on catheter securement devices to determine the optimal design of a future, larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize the uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit, corresponding to a 130% return on investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide a lower net benefit, with a return on investment of 79%. Cost-effectiveness and value of information analyses were based on data from a single pilot trial, which might affect the accuracy of our uncertainty estimation. Another limitation was that different follow-up durations for the larger trial were not evaluated. The value of information approach allows efficient trial design by maximizing the expected net benefit of additional research. This approach should be considered early in the design of randomized clinical trials. © The Author(s) 2014.
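
    The underlying optimization can be sketched with a simple normal-normal model: the expected net benefit of a candidate trial is the expected value of sample information minus its cost, and the design maximizing it is preferred. All inputs below are hypothetical, and the study's actual economic model was more detailed.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def enb_of_trial(n_per_arm, mu0, sd0, sigma, pop, cost_fixed, cost_per_patient,
                     n_sims=200_000):
        """Expected net benefit of a two-arm trial under a normal prior on the incremental
        net monetary benefit per patient (theta ~ N(mu0, sd0^2)) and a normally distributed
        trial estimate with sampling variance 2*sigma^2/n_per_arm.
        EVSI = E[max(posterior mean, 0)]*pop - max(mu0, 0)*pop (adopt only if beneficial)."""
        theta = rng.normal(mu0, sd0, n_sims)
        se = np.sqrt(2 * sigma**2 / n_per_arm)
        estimate = rng.normal(theta, se)
        w = sd0**2 / (sd0**2 + se**2)               # posterior weight on the trial estimate
        post_mean = w * estimate + (1 - w) * mu0
        evsi = (np.maximum(post_mean, 0).mean() - max(mu0, 0)) * pop
        cost = cost_fixed + cost_per_patient * 2 * n_per_arm
        return evsi - cost

    # Hypothetical inputs: prior mean benefit $50/patient, prior sd $150, outcome sd $800,
    # 100,000 future patients, $200,000 fixed cost, $1,500 per recruited patient.
    for n in (50, 100, 220, 388):
        print(n, round(enb_of_trial(n, 50, 150, 800, 100_000, 200_000, 1_500)))
    ```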

  16. Prospective Evaluation of Intraprostatic Inflammation and Focal Atrophy as a Predictor of Risk of High-Grade Prostate Cancer and Recurrence after Prostatectomy

    DTIC Science & Technology

    2014-07-01

    the two trials. The expected sample size for this work was 100 cases and 200 controls. Tissue was sufficient for 291 of the men (Task 2 completed in...not collected in SELECT), physical activity (PCPT [not collected in SELECT), cigarette smoking status at randomization (SELECT), use of aspirin

  17. The feasibility of using an explicit method for linear correction of particle size variation using NIR spectroscopy combined with the PLS2 regression method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Suhandy, D.

    2018-03-01

    NIR spectra obtained from a spectral data acquisition system contain both chemical information about the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in the physical properties of samples. One common approach is to include the physical information in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination analysis based on PLS-DA was conducted and the influence of the different particle sizes on the performance of PLS-DA was investigated. In the explicit method, the particle size is added directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing both the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of the determination of coffee type. The results show that, using the explicit method, the quality of the developed calibration model for determining the type of coffee is slightly superior, with a coefficient of determination (R²) of 0.99 and a root mean square error of cross-validation (RMSECV) of 0.041. The performance of the PLS2 calibration model for coffee-type determination with particle size compensation was quite good and able to predict the type of coffee at the two particle sizes with relatively high R²pred values. The prediction also resulted in low bias and RMSEP values.
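
    A minimal sketch of the explicit approach, using scikit-learn's PLSRegression on synthetic spectra, is shown below; the Y block carries both the coffee-type label and the particle size, so the model is asked to account for particle-size variation explicitly. The data generation and component count are assumptions, not the study's settings.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)

    # Synthetic stand-in for 220 NIR spectra (500 wavelengths): absorbance depends on
    # coffee type (chemical signal) plus a broad baseline shift driven by particle size.
    n, p = 220, 500
    coffee_type = rng.integers(0, 2, n)                  # 0 = non-civet, 1 = civet
    particle = rng.choice([212.0, 500.0], n)             # particle size in micrometres
    wl = np.linspace(0, 1, p)
    X = (0.02 * coffee_type[:, None] * np.sin(6 * np.pi * wl)  # type-related band
         + 0.001 * particle[:, None] * (1 + wl)                # size-related baseline
         + rng.normal(0, 0.01, (n, p)))

    # Explicit method: the Y block holds both the class label and the particle size.
    Y = np.column_stack([coffee_type, particle])
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

    pls2 = PLSRegression(n_components=6).fit(X_tr, Y_tr)
    Y_hat = pls2.predict(X_te)
    acc = ((Y_hat[:, 0] > 0.5).astype(int) == Y_te[:, 0]).mean()
    print("type-of-coffee accuracy:", round(acc, 3))
    ```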

  18. Role of CaCO3 and Charcoal Application on Organic Matter Retention in Silt-sized Aggregates

    NASA Astrophysics Data System (ADS)

    Berhe, A. A.; Kaiser, M.; Ghezzehei, T.; Myrold, D.; Kleber, M.

    2011-12-01

    The effectiveness of charcoal and calcium carbonate (CaCO3) applications for improving soil conditions has been well documented. However, their influence on the formation of silt-sized aggregates, and on the amount and protection of associated organic matter (OM) against microbial decomposition under differing soil mineralogical and microbiological conditions, is still unknown. For sustainable management of agricultural soils, silt-sized aggregates (2-50 μm) are of particularly large importance because they store up to 60% of soil organic carbon, with mean residence times between 70 and 400 years. The objectives of this study are i) to analyze the ability of soil amendments (CaCO3, charcoal, and their combined application) to increase the amount of silt-sized aggregates and associated organic matter, ii) to vary soil mineral conditions in order to establish relevant boundary conditions for the amendment-induced aggregation process, and iii) to determine how amendment-induced changes in the formation of silt-sized aggregates relate to microbial decomposition of OM. We set up artificial highly reactive (clay: 40%, sand: 57%, SOM: 3%) and low-reactivity soils (clay: 10%, sand: 89%, SOM: 1%) and mixed them with charcoal (1%) and/or CaCO3 (0.2%). The samples were adjusted to a water potential of 0.3 bar using a nutrient solution, and subsamples were incubated with a microbial inoculum. After four months, silt-sized aggregates were separated by a combination of wet-sieving and sedimentation. We hypothesize that the relative increase in the amount of silt-sized aggregates and associated OM is larger for less reactive soils than for highly reactive soils, because the addition of charcoal and/or CaCO3 produces a relatively larger increase in binding agents in less reactive soils. The effect of charcoal and/or CaCO3 application on the amount of silt-sized aggregates and associated OM is expected to increase with increasing microbial activity. Among the treatments, we expect the incubated 'charcoal+CaCO3' combination to have the largest effect on silt-size scale aggregation processes because the amounts of microbially derived cementing agents, charcoal-derived OM containing functional groups, and Ca2+ ions are enhanced at the same time.

  19. Technical note: Alternatives to reduce adipose tissue sampling bias.

    PubMed

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been made in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples, from 1 to 15, needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined with a Coulter Counter. These results were then fitted to a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of the acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a large improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques when the number of samples increased from 1 to 15, with the acceptance ratio for both techniques increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.
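
    The two sampling schemes being compared can be sketched as index-selection rules; the sample counts below are illustrative only.

    ```python
    import random

    def random_sample(n_total, n_pick, seed=0):
        """Simple random sampling of n_pick sample locations out of n_total."""
        rnd = random.Random(seed)
        return sorted(rnd.sample(range(n_total), n_pick))

    def systematic_sample(n_total, n_pick, seed=0):
        """Systematic sampling: a random start followed by a fixed interval,
        spreading the picks evenly across the muscle."""
        rnd = random.Random(seed)
        step = n_total / n_pick
        start = rnd.uniform(0, step)
        return [int(start + k * step) for k in range(n_pick)]

    # e.g. choosing 5 of 15 available adipose tissue samples per muscle
    print(random_sample(15, 5), systematic_sample(15, 5))
    ```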

  20. Assessing overland sediment transport to the Apalachicola River/Bay in Florida

    NASA Astrophysics Data System (ADS)

    Smar, D. E.; Hagen, S.; Daranpob, A.; Passeri, D.

    2011-12-01

    An ongoing study in Franklin County, Florida is focused on classifying the mechanisms of sediment transport from the overland areas to eventual deposition in the Apalachicola River and surrounding estuaries. Sediment cores and water column samples were collected at various locations along the Apalachicola River, its tributaries, and distributaries over a two-week period during the wet season. A preliminary particle size distribution analysis of the sediment cores and water column samples demonstrates decreasing particle sizes as the river and wetlands progress toward the ocean. Daily water samples from the mouth of the Apalachicola River and two distributaries reveal fluctuating total suspended solid (TSS) concentrations. To understand these deviations, the flow rate and water level at each location are inspected. Because the nearest USGS gage is approximately 16 miles upstream from these sites, investigation of the hydrodynamic influences on sediment transport is conducted by developing a hydrodynamic model simulating river flow and tides in the Apalachicola River and bay system. With spatially accurate flow rates and water levels, an attempt can be made to correlate flow rate with the fluctuating TSS concentrations. Precipitation events during the sampling period also account for spikes in the TSS concentrations, as expected. Assessing sediment transport to the river/bay system will lead to a better understanding of the regression or accretion of the river's alluvial fan and the marsh platform. High-flow periods following extreme rain events (which are expected to intensify under global climate change) transport more sediment downstream; however, the interaction with tidal and sea-level effects is still being analyzed. With rising sea levels, it is expected that the alluvial fan will recede and wetland areas may migrate inland, gradually transforming existing dry lands such as pine forests into new wetland regions. Future work will include an analysis of the tidal cycle during the sampling period to more accurately classify the fluctuation of TSS concentrations in the downstream samples. The data collection process and laboratory analysis will also be repeated in the dry season and subsequent years to observe temporal trends.

  1. Atomic force microscope observation of branching in single transcript molecules derived from human cardiac muscle

    NASA Astrophysics Data System (ADS)

    Reed, Jason; Hsueh, Carlin; Mishra, Bud; Gimzewski, James K.

    2008-09-01

    We have used an atomic force microscope to examine a clinically derived sample of single-molecule gene transcripts, in the form of double-stranded cDNA (c: complementary), obtained from human cardiac muscle without the use of polymerase chain reaction (PCR) amplification. We observed a log-normal distribution of transcript sizes, with most molecules being in the range of 0.4-7.0 kilobase pairs (kb), or 130-2300 nm in contour length, in accordance with the expected distribution of mRNA (m: messenger) sizes in mammalian cells. We observed novel branching structures not previously known to exist in cDNA, which could have profound negative effects on traditional analysis of cDNA samples through cloning, PCR and DNA sequencing.

  2. Massive Galaxies Are Larger in Dense Environments: Environmental Dependence of Mass-Size Relation of Early-type Galaxies

    NASA Astrophysics Data System (ADS)

    Yoon, Yongmin; Im, Myungshin; Kim, Jae-Woo

    2017-01-01

    Under the Λ cold dark matter (ΛCDM) cosmological models, massive galaxies are expected to be larger in denser environments through frequent hierarchical mergers with other galaxies. Yet, observational studies of low-redshift early-type galaxies have shown no such trend, standing as a puzzle to solve during the past decade. We analyzed 73,116 early-type galaxies at 0.1 ≤ z < 0.15, adopting a robust nonparametric size measurement technique and extending the analysis to many massive galaxies. We find for the first time that local early-type galaxies heavier than 10^11.2 M⊙ show a clear environmental dependence in the mass-size relation, in such a way that galaxies are as much as 20%-40% larger in the densest environments than in underdense environments. Splitting the sample into the brightest cluster galaxies (BCGs) and non-BCGs does not affect the result. This result agrees with the ΛCDM cosmological simulations and suggests that mergers played a significant role in the growth of massive galaxies in dense environments, as expected in theory.

  3. Ion concentration in micro and nanoscale electrospray emitters.

    PubMed

    Yuill, Elizabeth M; Baker, Lane A

    2018-06-01

    Solution-phase ion transport during electrospray has been characterized for nanopipettes, or glass capillaries pulled to nanoscale tip dimensions, and micron-sized electrospray ionization emitters. Direct visualization of charged fluorophores during the electrospray process is used to evaluate impacts of emitter size, ionic strength, analyte size, and pressure-driven flow on heterogeneous ion transport during electrospray. Mass spectrometric measurements of positively- and negatively-charged proteins were taken for micron-sized and nanopipette emitters under low ionic strength conditions to further illustrate a discrepancy in solution-driven transport of charged analytes. A fundamental understanding of analyte electromigration during electrospray, which is not always considered, is expected to provide control over selective analyte depletion and enrichment, and can be harnessed for sample cleanup. Graphical abstract: Fluorescence micrographs of ion migration in nanoscale pipettes while solution is electrosprayed.

  4. Using re-randomization to increase the recruitment rate in clinical trials - an assessment of three clinical areas.

    PubMed

    Kahan, Brennan C

    2016-12-13

    Patient recruitment in clinical trials is often challenging, and as a result, many trials are stopped early due to insufficient recruitment. The re-randomization design allows patients to be re-enrolled and re-randomized for each new treatment episode that they experience. Because it allows multiple enrollments for each patient, this design has been proposed as a way to increase the recruitment rate in clinical trials. However, it is unknown to what extent recruitment could be increased in practice. We modelled the expected recruitment rate for parallel-group and re-randomization trials in different settings based on estimates from real trials and datasets. We considered three clinical areas: in vitro fertilization, severe asthma exacerbations, and acute sickle cell pain crises. We compared the two designs in terms of the expected time to complete recruitment, and the sample size recruited over a fixed recruitment period. Across the different scenarios we considered, we estimated that re-randomization could reduce the expected time to complete recruitment by between 4 and 22 months (relative reductions of 19% and 45%), or increase the sample size recruited over a fixed recruitment period by between 29% and 171%. Re-randomization can increase recruitment most for trials with a short follow-up period, a long trial recruitment duration, and patients with high rates of treatment episodes. Re-randomization has the potential to increase the recruitment rate in certain settings, and could lead to quicker and more efficient trials in these scenarios.
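
    A back-of-the-envelope version of the recruitment comparison can be written as follows, assuming a constant accrual rate and a constant episode rate; the authors' model was based on estimates from real trials and datasets and is more detailed.

    ```python
    def expected_enrollments(patients_per_year, years, episodes_per_patient_year,
                             re_randomize):
        """Expected enrollments over a fixed recruitment window (simplified model:
        every eligible episode of an already-recruited patient is re-enrolled)."""
        new_patients = patients_per_year * years
        if not re_randomize:
            return new_patients
        # On average, a patient recruited at a uniformly distributed time has years/2 of
        # remaining window in which further episodes can be re-randomized.
        extra = new_patients * episodes_per_patient_year * years / 2
        return new_patients + extra

    # Hypothetical: 100 new patients/year, 2-year window, 0.8 extra episodes/patient-year.
    print(expected_enrollments(100, 2, 0.8, re_randomize=False),
          expected_enrollments(100, 2, 0.8, re_randomize=True))
    ```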

  5. Magnetic and critical properties of Pr0.6Sr0.4MnO3 nanocrystals prepared by a combination of the solid state reaction and the mechanical ball milling methods

    NASA Astrophysics Data System (ADS)

    Dung, Nguyen Thi; Linh, Dinh Chi; Huyen Yen, Pham Duc; Yu, Seong Cho; Van Dang, Nguyen; Dang Thanh, Tran

    2018-06-01

    The influence of the crystallite size on the magnetic and critical properties of Pr0.6Sr0.4MnO3 nanocrystals has been investigated. The results show that the Curie temperature and the magnetization slightly decrease with decreasing average crystallite size. Based on the mean-field theory and the magnetic-field dependences of magnetization at different temperatures, we point out that the ferromagnetic-paramagnetic phase transition in the samples is a second-order phase transition with critical exponents (β, γ, and δ) close to those of the mean-field theory. However, the values of β, γ, and δ obtained for the samples show a small deviation from those expected for the mean-field theory. This means that short-range ferromagnetic interactions appear in the smaller particles. In other words, the nanocrystals become more magnetically inhomogeneous at smaller crystallite sizes, which could be explained by the presence of surface-related effects, lattice strain and distortions that weaken the ferromagnetic interaction at small crystallite sizes.

  6. Two Different Views on the World Around Us: The World of Uniformity versus Diversity.

    PubMed

    Kwon, JaeHwan; Nayakankuppam, Dhananjay

    2016-01-01

    We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of "uniformity." As such, they easily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of "diversity," such that they "hesitate" to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population in general as a constitution either of homogeneous or heterogeneous entities.

  7. Influence of grain size and sintering temperature on the critical behavior near the paramagnetic to ferromagnetic phase transition temperature in La0.67Sr0.33MnO3 nanoparticles

    NASA Astrophysics Data System (ADS)

    Baaziz, H.; Tozri, A.; Dhahri, E.; Hlil, E. K.

    2018-03-01

    We have undertaken a systematic study of the critical behavior in La0.67Sr0.33MnO3 nanoparticles, sintered at different temperatures (L6, L8, L10 and L12, sintered at 600 °C, 800 °C, 1000 °C and 1200 °C respectively), by magnetization measurements. The critical exponents are estimated by various techniques such as the modified Arrott plot, the Kouvel-Fisher plot and the critical isotherm technique. Compared to standard models, the critical exponents are close to those expected by the mean-field model (with β = 0.5, γ = 1, and δ = 3) for the L6, L8 and L10 samples and by the 3D Heisenberg model (β = 0.365, γ = 1.336 and δ = 4.80) for the L12 sample. We conclude that the reduction of grain size strongly influences the universality class.
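
    For reference, the standard scaling relations behind the modified Arrott plot, Kouvel-Fisher and critical-isotherm analyses are (with the mean-field and 3D Heisenberg values quoted above):

    ```latex
    % Defining relations for the critical exponents near T_C (standard scaling forms):
    \[
      M_S(T) \propto (-\varepsilon)^{\beta}, \quad \varepsilon<0; \qquad
      \chi_0^{-1}(T) \propto \varepsilon^{\gamma}, \quad \varepsilon>0; \qquad
      M \propto H^{1/\delta} \ \text{at}\ T=T_C,
    \]
    \[
      \varepsilon = \frac{T-T_C}{T_C}, \qquad
      \delta = 1 + \frac{\gamma}{\beta} \ \ \text{(Widom relation)}.
    \]
    % Modified Arrott (Arrott--Noakes) equation of state used to build the plots:
    \[
      \left(\frac{H}{M}\right)^{1/\gamma} = a\,\varepsilon + b\,M^{1/\beta}.
    \]
    % Mean-field values: beta = 0.5, gamma = 1, delta = 3; 3D Heisenberg:
    % beta ~ 0.365, gamma ~ 1.336, delta ~ 4.80 (as quoted in the abstract).
    ```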

  8. Using an EM Covariance Matrix to Estimate Structural Equation Models with Missing Data: Choosing an Adjusted Sample Size to Improve the Accuracy of Inferences

    ERIC Educational Resources Information Center

    Enders, Craig K.; Peugh, James L.

    2004-01-01

    Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…

  9. Authoritative school climate and high school dropout rates.

    PubMed

    Jia, Yuane; Konold, Timothy R; Cornell, Dewey

    2016-06-01

    This study tested the association between school-wide measures of an authoritative school climate and high school dropout rates in a statewide sample of 315 high schools. Regression models at the school level of analysis used teacher and student measures of disciplinary structure, student support, and academic expectations to predict overall high school dropout rates. Analyses controlled for school demographics of school enrollment size, percentage of low-income students, percentage of minority students, and urbanicity. Consistent with authoritative school climate theory, moderation analyses found that when students perceive their teachers as supportive, high academic expectations are associated with lower dropout rates. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Genetic variation, relatedness, and effective population size of polar bears (Ursus maritimus) in the southern Beaufort Sea, Alaska.

    PubMed

    Cronin, Matthew A; Amstrup, Steven C; Talbot, Sandra L; Sage, George K; Amstrup, Kristin S

    2009-01-01

    Polar bears (Ursus maritimus) are unique among bears in that they are adapted to the Arctic sea ice environment. Genetic data are useful for understanding their evolution and can contribute to management. We assessed parentage and relatedness of polar bears in the southern Beaufort Sea, Alaska, with genetic data and field observations of age, sex, and mother-offspring and sibling relationships. Genotypes at 14 microsatellite DNA loci for 226 bears indicate that genetic variation is comparable to other populations of polar bears with mean number of alleles per locus of 7.9 and observed and expected heterozygosity of 0.71. The genetic data verified 60 field-identified mother-offspring pairs and identified 10 additional mother-cub pairs and 48 father-offspring pairs. The entire sample of related and unrelated bears had a mean pairwise relatedness index (r(xy)) of approximately zero, parent-offspring and siblings had r(xy) of approximately 0.5, and 5.2% of the samples had r(xy) values within the range expected for parent-offspring. Effective population size (N(e) = 277) and the ratio of N(e) to total population size (N(e)/N = 0.182) were estimated from the numbers of reproducing males and females. N(e) estimates with genetic methods gave variable results. Our results verify and expand field data on reproduction by females and provide new data on reproduction by males and estimates of relatedness and N(e) in a polar bear population.
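
    The abstract states that N_e was estimated from the numbers of reproducing males and females; the conventional formula for that calculation (assumed here, the abstract does not spell it out) is:

    ```latex
    % Standard demographic estimator of effective population size from the numbers of
    % breeding males (N_m) and breeding females (N_f); N is the total (census) size.
    \[
      N_e = \frac{4\,N_m\,N_f}{N_m + N_f},
      \qquad
      \frac{N_e}{N} = \text{ratio of effective to total population size}.
    \]
    ```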

  11. Genetic variation, relatedness, and effective population size of polar bears (Ursus maritimus) in the southern Beaufort Sea, Alaska

    USGS Publications Warehouse

    Cronin, Matthew A.; Amstrup, Steven C.; Talbot, Sandra L.; Sage, George K.; Amstrup, Kristin S.

    2009-01-01

    Polar bears (Ursus maritimus) are unique among bears in that they are adapted to the Arctic sea ice environment. Genetic data are useful for understanding their evolution and can contribute to management. We assessed parentage and relatedness of polar bears in the southern Beaufort Sea, Alaska, with genetic data and field observations of age, sex, and mother–offspring and sibling relationships. Genotypes at 14 microsatellite DNA loci for 226 bears indicate that genetic variation is comparable to other populations of polar bears with mean number of alleles per locus of 7.9 and observed and expected heterozygosity of 0.71. The genetic data verified 60 field-identified mother–offspring pairs and identified 10 additional mother–cub pairs and 48 father–offspring pairs. The entire sample of related and unrelated bears had a mean pairwise relatedness index (rxy) of approximately zero, parent–offspring and siblings had rxy of approximately 0.5, and 5.2% of the samples had rxy values within the range expected for parent-offspring. Effective population size (Ne= 277) and the ratio of Ne to total population size (Ne/N = 0.182) were estimated from the numbers of reproducing males and females. Ne estimates with genetic methods gave variable results. Our results verify and expand field data on reproduction by females and provide new data on reproduction by males and estimates of relatedness and Ne in a polar bear population.

  12. Field application of a multi-frequency acoustic instrument to monitor sediment for silt erosion study in Pelton turbine in Himalayan region, India

    NASA Astrophysics Data System (ADS)

    Rai, A. K.; Kumar, A.; Hies, T.; Nguyen, H. H.

    2016-11-01

    High sediment load passing through hydropower components erodes the hydraulic components, resulting in loss of efficiency, interruptions in power production, and downtime for repair/maintenance, especially in Himalayan regions. The size and concentration of sediment play a major role in silt erosion. The traditional process of collecting samples manually for analysis in the laboratory cannot meet the need for monitoring temporal variation in sediment properties. In this study, a multi-frequency acoustic instrument was applied at the desilting chamber to monitor the sediment size and concentration entering the turbine. The sediment size and concentration entering the turbine were also measured with manual samples collected twice daily. The samples collected manually were analysed in the laboratory with a laser diffraction instrument for size and concentration, apart from analysis by drying and filtering methods for concentration. A conductivity probe was used to calculate total dissolved solids, which were further used with the results from the drying method to calculate the suspended solid content of the samples. The acoustic instrument was found to provide sediment concentration values similar to the drying and filtering methods. However, no good match was found between the mean grain size from the acoustic method, at its current state of development, and that from the laser diffraction method in this first field application. Future versions of the software and significant sensitivity improvements of the ultrasonic transducers are expected to increase the accuracy of the obtained results. As the instrument is able to capture the concentration and, in the future, most likely a more accurate mean grain size of the suspended sediments, its application for monitoring silt erosion in hydropower plants should be highly useful.

  13. Test anxiety in mathematics among early undergraduate students in a British university in Malaysia

    NASA Astrophysics Data System (ADS)

    Karjanto, Natanael; Yong, Su Ting

    2013-03-01

    The level of test anxiety in mathematics subjects among early undergraduate students at the University of Nottingham Malaysia Campus is studied in this article. The sample consists of 206 students taking several mathematics modules who completed the questionnaires on test anxiety just before they entered the venue for midterm examinations. The sample data cover differences in academic level, gender group and nationality background. The level of test anxiety in mathematics is measured using seven Likert questionnaire statements adapted from the Test Anxiety Inventory, describing one's emotional feeling before the start of an examination. In general, the results show that students who had a lower score expectation were more anxious than those who had a higher score expectation, but they obtained better scores than they expected. Across academic levels, gender groups and nationality backgrounds, there were no significant correlations between the level of test anxiety and the students' academic performance. The effect sizes of the correlations ranged from extremely small to moderate.

  14. Phytoestrogens and mycotoxins in Iowa streams: An examination of underinvestigated compounds in agricultural basins

    USGS Publications Warehouse

    Kolpin, Dana W.; Hoerger, Corinne C.; Meyer, Michael T.; Wettstein, Felix E.; Hubbard, Laura E.; Bucheli, Thomas D.

    2010-01-01

    This study provides the first broad-scale investigation on the spatial and temporal occurrence of phytoestrogens and mycotoxins in streams in the United States. Fifteen stream sites across Iowa were sampled five times throughout the 2008 growing season to capture a range of climatic and crop-growth conditions. Basin size upstream from sampling sites ranged from 7 km² to >836,000 km². Atrazine (herbicide) also was measured in all samples as a frame-of-reference agriculturally derived contaminant. Target compounds were frequently detected in stream samples: atrazine (100%), formononetin (80%), equol (45%), deoxynivalenol (43%), daidzein (32%), biochanin A (23%), zearalenone (13%), and genistein (11%). The nearly ubiquitous detection of formononetin (isoflavone) suggests a widespread agricultural source, as one would expect with the intense row crop and livestock production present across Iowa. Conversely, the less spatially widespread detections of deoxynivalenol (mycotoxin) suggest a more variable source due to the required combination of proper host and proper temperature and moisture conditions necessary to promote Fusarium spp. infections. Although atrazine concentrations commonly exceeded 100 ng L-1 (42/75 measurements), only deoxynivalenol (6/56 measurements) had concentrations that occasionally exceeded this level. Temporal patterns in concentrations varied substantially between atrazine, formononetin, and deoxynivalenol, as one would expect for contaminants with different source inputs and processes of formation and degradation. The greatest phytoestrogen and mycotoxin concentrations were observed during spring snowmelt conditions. Phytoestrogens and mycotoxins were detected at all sampling sites regardless of basin size. The ecotoxicological effects from long-term, low-level exposures to phytoestrogens and mycotoxins or complex chemicals mixtures including these compounds that commonly take place in surface water are poorly understood and have yet to be systematically investigated in environmental studies.

  15. Phytoestrogens and mycotoxins in Iowa streams: An examination of underinvestigated compounds in agricultural basins

    USGS Publications Warehouse

    Kolpin, D.W.; Hoerger, C.C.; Meyer, M.T.; Wettstein, F.E.; Hubbard, L.E.; Bucheli, T.D.

    2010-01-01

    This study provides the first broad-scale investigation on the spatial and temporal occurrence of phytoestrogens and mycotoxins in streams in the United States. Fifteen stream sites across Iowa were sampled five times throughout the 2008 growing season to capture a range of climatic and crop-growth conditions. Basin size upstream from sampling sites ranged from 7 km2 to >836,000 km2. Atrazine (herbicide) also was measured in all samples as a frame-of-reference agriculturally derived contaminant. Target compounds were frequently detected in stream samples: atrazine (100%), formononetin (80%), equol (45%), deoxynivalenol (43%), daidzein (32%), biochanin A (23%), zearalenone (13%), and genistein (11%). The nearly ubiquitous detection of formononetin (isoflavone) suggests a widespread agricultural source, as one would expect with the intense row crop and livestock production present across Iowa. Conversely, the less spatially widespread detections of deoxynivalenol (mycotoxin) suggest a more variable source due to the required combination of proper host and proper temperature and moisture conditions necessary to promote Fusarium spp. infections. Although atrazine concentrations commonly exceeded 100 ng L-1 (42/75 measurements), only deoxynivalenol (6/56 measurements) had concentrations that occasionally exceeded this level. Temporal patterns in concentrations varied substantially between atrazine, formononetin, and deoxynivalenol, as one would expect for contaminants with different source inputs and processes of formation and degradation. The greatest phytoestrogen and mycotoxin concentrations were observed during spring snowmelt conditions. Phytoestrogens and mycotoxins were detected at all sampling sites regardless of basin size. The ecotoxicological effects from long-term, low-level exposures to phytoestrogens and mycotoxins or complex chemical mixtures including these compounds that commonly take place in surface water are poorly understood and have yet to be systematically investigated in environmental studies. Copyright © 2010 by the American Society of Agronomy.

  16. Direct Analysis of Low-Volatile Molecular Marker Extract from Airborne Particulate Matter Using Sensitivity Correction Method

    PubMed Central

    Irei, Satoshi

    2016-01-01

    Molecular marker analysis of environmental samples often requires time-consuming preseparation steps. Here, analysis of low-volatile nonpolar molecular markers (5-6 ring polycyclic aromatic hydrocarbons or PAHs, hopanoids, and n-alkanes) without the preseparation procedure is presented. Analysis of artificial sample extracts was directly conducted by gas chromatography-mass spectrometry (GC-MS). After every sample injection, a standard mixture was also analyzed to correct for variation in instrumental sensitivity caused by the unfavorable matrix contained in the extract. The method was further validated for the PAHs using the NIST standard reference materials (SRMs) and then applied to airborne particulate matter samples. Tests with the SRMs showed that overall our methodology was validated within an uncertainty of ~30%. The measurement results of airborne particulate matter (PM) filter samples showed a strong correlation between the PAHs, implying contributions from the same emission source. Analysis of size-segregated PM filter samples showed that the markers' size distributions fell within the PM smaller than 0.4 μm aerodynamic diameter. The observations were consistent with our expectation of their possible sources. Thus, the method was found to be useful for molecular marker studies. PMID:27127511
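
    The bracketing-standard correction described above can be sketched as a simple response-ratio rescaling; a minimal illustration in which every peak area and variable name is a hypothetical placeholder, not data from the study:

      # Drift correction: rescale each analyte response by the standard run after it.
      reference_standard_area = 1.00e6   # standard response at the start of the sequence (hypothetical)
      runs = [
          {"sample_area": 4.2e5, "bracketing_standard_area": 9.5e5},
          {"sample_area": 3.8e5, "bracketing_standard_area": 8.7e5},
      ]
      for run in runs:
          drift_factor = run["bracketing_standard_area"] / reference_standard_area
          corrected_area = run["sample_area"] / drift_factor
          print(f"raw={run['sample_area']:.3g}  corrected={corrected_area:.3g}")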

  17. Choosing a design to fit the situation: how to improve specificity and positive predictive values using Bayesian lot quality assurance sampling

    PubMed Central

    Olives, Casey; Pagano, Marcello

    2013-01-01

    Background Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined. Methods We present Bayesian-LQAS (B-LQAS), an approach to incorporating prior information into the choice of the LQAS sample size and decision rule, and explore its properties through a numerical study. Additionally, we analyse vaccination coverage data from UNICEF’s State of the World’s Children in 1968–1989 and 2008 to exemplify the performance of LQAS and B-LQAS. Results Results of our numerical study show that the choice of LQAS sample size and decision rule is sensitive to the distribution of prior information, as well as to individual beliefs about the importance of correct classification. Application of the B-LQAS approach to the UNICEF data improves specificity and PPV in both time periods (1968–1989 and 2008) with minimal reductions in sensitivity and negative predictive value. Conclusions LQAS is shown to be a robust tool that is not necessarily prone to poor specificity and PPV as previously alleged. In situations where prior or historical data are available, B-LQAS can lead to improvements in expected performance. PMID:23378151
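
    The sensitivity of an LQAS design to its sample size n and decision rule d can be made concrete with the classical binomial operating characteristics; a minimal sketch under assumed coverage thresholds (the 80%/50% thresholds and the choice n = 19, d = 13 below are illustrative, not values from the paper):

      from scipy.stats import binom

      def lqas_risks(n, d, p_upper, p_lower):
          """Classical LQAS: accept the lot if at least d of n sampled units are 'successes'.
          Producer risk: P(reject | true coverage = p_upper, an acceptable lot).
          Consumer risk: P(accept | true coverage = p_lower, an unacceptable lot)."""
          producer_risk = binom.cdf(d - 1, n, p_upper)        # P(X < d | p_upper)
          consumer_risk = 1.0 - binom.cdf(d - 1, n, p_lower)  # P(X >= d | p_lower)
          return producer_risk, consumer_risk

      alpha, beta = lqas_risks(n=19, d=13, p_upper=0.80, p_lower=0.50)
      print(f"producer risk = {alpha:.3f}, consumer risk = {beta:.3f}")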

  18. Radiographic analysis of vocal tract length and its relation to overall body size in two canid species.

    PubMed

    Plotsky, K; Rendall, D; Riede, T; Chase, K

    2013-09-01

    Body size is an important determinant of resource and mate competition in many species. Competition is often mediated by conspicuous vocal displays, which may help to intimidate rivals and attract mates by providing honest cues to signaler size. Fitch proposed that vocal tract resonances (or formants) should provide particularly good, or honest, acoustic cues to signaler size because they are determined by the length of the vocal tract, which in turn, is hypothesized to scale reliably with overall body size. There is some empirical support for this hypothesis, but to date, many of the effects have been either mixed for males compared with females, weaker than expected in one or the other sex, or complicated by sampling issues. In this paper, we undertake a direct test of Fitch's hypothesis in two canid species using large samples that control for age- and sex-related variation. The samples involved radiographic images of 120 Portuguese water dogs Canis lupus familiaris and 121 Russian silver foxes Vulpes vulpes. Direct measurements were made of vocal tract length from X-ray images and compared against independent measures of body size. In adults of both species, and within both sexes, overall vocal tract length was strongly and significantly correlated with body size. Effects were strongest for the oral component of the vocal tract. By contrast, the length of the pharyngeal component was not as consistently related to body size. These outcomes are some of the clearest evidence to date in support of Fitch's hypothesis. At the same time, they highlight the potential for elements of both honest and deceptive body signaling to occur simultaneously via differential acoustic cues provided by the oral versus pharyngeal components of the vocal tract.

  19. Radiographic analysis of vocal tract length and its relation to overall body size in two canid species

    PubMed Central

    Plotsky, K.; Rendall, D.; Riede, T.; Chase, K.

    2013-01-01

    Body size is an important determinant of resource and mate competition in many species. Competition is often mediated by conspicuous vocal displays, which may help to intimidate rivals and attract mates by providing honest cues to signaler size. Fitch proposed that vocal tract resonances (or formants) should provide particularly good, or honest, acoustic cues to signaler size because they are determined by the length of the vocal tract, which in turn, is hypothesized to scale reliably with overall body size. There is some empirical support for this hypothesis, but to date, many of the effects have been either mixed for males compared with females, weaker than expected in one or the other sex, or complicated by sampling issues. In this paper, we undertake a direct test of Fitch’s hypothesis in two canid species using large samples that control for age- and sex-related variation. The samples involved radiographic images of 120 Portuguese water dogs Canis lupus familiaris and 121 Russian silver foxes Vulpes vulpes. Direct measurements were made of vocal tract length from X-ray images and compared against independent measures of body size. In adults of both species, and within both sexes, overall vocal tract length was strongly and significantly correlated with body size. Effects were strongest for the oral component of the vocal tract. By contrast, the length of the pharyngeal component was not as consistently related to body size. These outcomes are some of the clearest evidence to date in support of Fitch’s hypothesis. At the same time, they highlight the potential for elements of both honest and deceptive body signaling to occur simultaneously via differential acoustic cues provided by the oral versus pharyngeal components of the vocal tract. PMID:24363497

  20. Insights into bioassessment of marine pollution using body-size distinctness of planktonic ciliates based on a modified trait hierarchy.

    PubMed

    Xu, Henglong; Jiang, Yong; Xu, Guangjian

    2016-06-15

    Based on a modified trait hierarchy of body-size units, the feasibility for bioassessment of water pollution using body-size distinctness of planktonic ciliates was studied in a semi-enclosed bay, northern China. An annual dataset was collected at five sampling stations within a gradient of heavy metal contaminants. Results showed that: (1) in terms of probability density, the body-size spectra of the ciliates represented significant differences among the five stations; (2) bootstrap average analysis demonstrated a spatial variation in body-size rank patterns in response to pollution stress due to heavy metals; and (3) the average body-size distinctness (Δz(+)) and variation in body-size distinctness (Λz(+)), based on the modified trait hierarchy, revealed a clear departure pattern from the expected body-size spectra in areas with pollutants. These results suggest that the body-size diversity measures based on the modified trait hierarchy of the ciliates may be used as a potential indicator of marine pollution. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Brief communication: the relation between standard error of the estimate and sample size of histomorphometric aging methods.

    PubMed

    Hennig, Cheryl; Cooper, David

    2011-08-01

    Histomorphometric aging methods report varying degrees of precision, measured through Standard Error of the Estimate (SEE). These techniques have been developed from variable sample sizes (n), and the impact of n on reported aging precision has not been rigorously examined in the anthropological literature. This brief communication explores the relation between n and SEE through a review of the literature (abstracts, articles, book chapters, theses, and dissertations), predictions based upon sampling theory, and a simulation. Published SEE values for age prediction, derived from 40 studies, range from 1.51 to 16.48 years (mean 8.63; SD 3.81 years). In general, these values are widely distributed for smaller samples and the distribution narrows as n increases--a pattern expected from sampling theory. For the two studies that have samples in excess of 200 individuals, the SEE values are very similar (10.08 and 11.10 years) with a mean of 10.59 years. Assuming this mean value is a 'true' characterization of the error at the population level, the 95% confidence intervals for SEE values from samples of 10, 50, and 150 individuals are on the order of ± 4.2, 1.7, and 1.0 years, respectively. While numerous sources of variation potentially affect the precision of different methods, the impact of sample size cannot be overlooked. The uncertainty associated with SEE values derived from smaller samples complicates the comparison of approaches based upon different methodology and/or skeletal elements. Meaningful comparisons require larger samples than have frequently been used and should ideally be based upon standardized samples. Copyright © 2011 Wiley-Liss, Inc.
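
    The narrowing described above follows from the sampling distribution of a standard deviation; a rough chi-square sketch, assuming the SEE behaves like a sample standard deviation with roughly n - 2 degrees of freedom (an approximation for illustration only, not the exact calculation used in the study):

      import numpy as np
      from scipy.stats import chi2

      def see_interval(see, n, df_lost=2, level=0.95):
          """Approximate interval for a population SEE from an estimate based on n individuals."""
          df = n - df_lost
          lo_q, hi_q = chi2.ppf([(1 - level) / 2, (1 + level) / 2], df)
          return see * np.sqrt(df / hi_q), see * np.sqrt(df / lo_q)

      for n in (10, 50, 150):
          lo, hi = see_interval(see=10.59, n=n)
          print(f"n = {n:3d}: 95% interval roughly {lo:.1f} to {hi:.1f} years")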

  2. Millimeter-Wave Absorption as a Quality Control Tool for M-Type Hexaferrite Nanopowders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCloy, John S.; Korolev, Konstantin A.; Crum, Jarrod V.

    2013-01-01

    Millimeter wave (MMW) absorption measurements have been conducted on commercial samples of large (micrometer-sized) and small (nanometer-sized) particles of BaFe12O19 and SrFe12O19 using a quasi-optical MMW spectrometer and a series of backwards wave oscillators encompassing the 30-120 GHz range. Effective anisotropy of the particles calculated from the resonant absorption frequency indicates lower overall anisotropy in the nano-particles. Due to their high magnetocrystalline anisotropy, both BaFe12O19 and SrFe12O19 are expected to have spin resonances in the 45-55 GHz range. Several of the sampled BaFe12O19 powders did not have MMW absorptions, so they were further investigated by DC magnetization and x-ray diffraction to assess magnetic behavior and structure. The samples with absent MMW absorption contained primarily iron oxides, suggesting that MMW absorption could be used for quality control in hexaferrite powder manufacture.

  3. Ferromagnetic resonance studies of lunar core stratigraphy

    NASA Technical Reports Server (NTRS)

    Housley, R. M.; Cirlin, E. H.; Goldberg, I. B.; Crowe, H.

    1976-01-01

    We first review the evidence which links the characteristic ferromagnetic resonance observed in lunar fines samples with agglutinatic glass produced primarily by micrometeorite impacts and present new results on Apollo 15, 16, and 17 breccias which support this link by showing that only regolith breccias contribute significantly to the characteristic FMR intensity. We then provide a calibration of the amount of Fe metal in the form of uniformly magnetized spheres required to give our observed FMR intensities and discuss the theoretical magnetic behavior to be expected of Fe spheres as a function of size. Finally, we present FMR results on samples from every 5 mm interval in the core segments 60003, 60009, and 70009. These results lead us to suggest: (1) that secondary mixing may generally be extensive during regolith deposition so that buried regolith surfaces are hard to recognize or define; and (2) that local grinding of rocks and pebbles during deposition may lead to short scale fluctuations in grain size, composition, and apparent exposure age of samples.

  4. Particle size dependence of alkali and alkaline earth metal enrichment in marine aerosols from Bermuda

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, E.J.; Hoffman, G.L.; Duce, R.A.

    1980-10-20

    Three cascade impactor samples were collected from a 20-m-high tower on the southeastern coast of Bermuda. These samples were analyzed for Na, K, Ca, Mg, and Fe by atomic absorption spectrophotometry. When the alkali-alkaline earth metal concentrations are corrected for a soil-derived component, utilizing the atmospheric Fe concentrations, Mg, Ca, and Na are found to be present in the same relative abundances as in seawater for all particle sizes sampled. Potassium also shows no deviation from a bulk seawater composition for particles with radii greater than approximately 0.5 μm. However, excess K above that expected from either a bulk seawater or soil source is observed on particles with radii less than approximately 0.5 μm. While oceanic chemical fractionation processes during bubble bursting may be responsible for this excess small particle K, it is most likely due to long-range transport of K-rich particles of terrestrial vegetative origin.

  5. Development of a High-Resolution, Single-Photon X-Ray Detector

    NASA Technical Reports Server (NTRS)

    Seidel, George M.

    1996-01-01

    Research on the development of a low-temperature, magnetic bolometer for x-ray detection is reported. The principal accomplishments during the first phase of this research are as follows. (1) We have constructed SQUID magnetometers and detected both 122 keV and 6 keV x-rays in relatively larger metallic samples with high quantum efficiency. (2) The magnetic properties of a metal sample with localized paramagnetic spins have been measured and found to agree with theoretical expectations. (3) The size of the magnetic response of the sample to x-rays is in agreement with predictions based on the properties of the sample and sensitivity of the magnetometer, supporting the prediction that a resolution of 1 eV at 10 keV should be achievable.

  6. Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust

    PubMed Central

    Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin

    2015-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881
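
    The interference-correction logic can be illustrated with a generic two-component subtraction; this sketch is not the NMAM 7603 or MSHA P-7 procedure, and every absorbance, ratio, and slope below is a hypothetical placeholder:

      def corrected_quartz_mass(abs_quartz_peak, abs_kaolinite_peak,
                                kaolinite_ratio, quartz_slope):
          """Estimate quartz mass (micrograms) after removing the kaolinite contribution.
          kaolinite_ratio: absorbance that kaolinite adds at the quartz analytical peak per unit
              of its own reference-peak absorbance, ideally calibrated on the same respirable
              size fraction as the dust being analyzed.
          quartz_slope: micrograms of quartz per unit of corrected absorbance."""
          corrected_absorbance = max(abs_quartz_peak - kaolinite_ratio * abs_kaolinite_peak, 0.0)
          return quartz_slope * corrected_absorbance

      print(corrected_quartz_mass(abs_quartz_peak=0.040, abs_kaolinite_peak=0.020,
                                  kaolinite_ratio=0.25, quartz_slope=1500.0))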

  7. Consideration of kaolinite interference correction for quartz measurements in coal mine dust.

    PubMed

    Lee, Taekhee; Chisholm, William P; Kashon, Michael; Key-Schwartz, Rosa J; Harper, Martin

    2013-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed "deviation," not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected.

  8. Colony size-frequency distribution of pocilloporid juvenile corals along a natural environmental gradient in the Red Sea.

    PubMed

    Lozano-Cortés, Diego F; Berumen, Michael L

    2016-04-30

    Coral colony size-frequency distributions can be used to assess population responses to local environmental conditions and disturbances. In this study, we surveyed juvenile pocilloporids, herbivorous fish densities, and algal cover in the central and southern Saudi Arabian Red Sea. We sampled nine reefs with different disturbance histories along a north-south natural gradient of physicochemical conditions (higher salinity and wider temperature fluctuations in the north, and higher turbidity and productivity in the south). Since coral populations with negatively skewed size-frequency distributions have been associated with unfavorable environmental conditions, we expected to find more negative distributions in the southern Red Sea, where corals are potentially experiencing suboptimal conditions. Although juvenile coral and parrotfish densities differed significantly between the two regions, mean colony size and size-frequency distributions did not. Results suggest that pocilloporid colony size-frequency distribution may not be an accurate indicator of differences in biological or oceanographic conditions in the Red Sea. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. The control of meal size in human subjects: a role for expected satiety, expected satiation and premeal planning.

    PubMed

    Brunstrom, Jeffrey M

    2011-05-01

    Unlike energy expenditure, energy intake occurs during discrete events: snacks and meals. The prevailing view is that meal size is governed by physiological and psychological events that promote satiation towards the end of a meal. This review explores an alternative and perhaps controversial proposition. Specifically that satiation plays a secondary role, and that meal size (kJ) is controlled by decisions about portion size, before a meal begins. Recently, techniques have been developed that enable us to quantify 'expected satiation' and 'expected satiety' (respectively, the fullness and the respite from hunger that foods are expected to confer). When compared on a kJ-for-kJ basis, these expectations differ markedly across foods. Moreover, in self-selected meals, these measures are remarkably good predictors of the energy content of food that ends up on our plate, even more important than palatability. Expected satiation and expected satiety are influenced by the physical characteristics of a food (e.g. perceived volume). However, they are also learned. Indeed, there is now mounting evidence for 'expected-satiation drift', a general tendency for a food to have higher expected satiation as it increases in familiarity. Together, these findings show that important elements of control (discrimination and learning/adaptation) are clearly evident in plans around portion size. Since most meals are eaten in their entirety, understanding the nature of these controls should be given high priority.

  10. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    PubMed

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
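
    For orientation, the fixed parallel-group comparator in such evaluations is usually a standard two-sample calculation; a minimal normal-approximation sketch with an illustrative marker prevalence and subgroup effects (not the scenarios evaluated in the paper):

      from scipy.stats import norm

      def n_per_arm(delta, sd=1.0, alpha=0.05, power=0.80):
          """Per-arm sample size for a two-sided two-sample z-test (normal approximation)."""
          z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
          return 2 * ((z_a + z_b) * sd / delta) ** 2

      prevalence = 0.30                                            # hypothetical marker-positive prevalence
      overall_effect = prevalence * 0.5 + (1 - prevalence) * 0.2   # hypothetical subgroup effects
      print(f"overall standardized effect = {overall_effect:.2f}, "
            f"n per arm ~ {n_per_arm(overall_effect):.0f}")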

  11. Multivariate test power approximations for balanced linear mixed models in studies with missing data.

    PubMed

    Ringham, Brandy M; Kreidler, Sarah M; Muller, Keith E; Glueck, Deborah H

    2016-07-30

    Multilevel and longitudinal studies are frequently subject to missing data. For example, biomarker studies for oral cancer may involve multiple assays for each participant. Assays may fail, resulting in missing data values that can be assumed to be missing completely at random. Catellier and Muller proposed a data analytic technique to account for data missing at random in multilevel and longitudinal studies. They suggested modifying the degrees of freedom for both the Hotelling-Lawley trace F statistic and its null case reference distribution. We propose parallel adjustments to approximate power for this multivariate test in studies with missing data. The power approximations use a modified non-central F statistic, which is a function of (i) the expected number of complete cases, (ii) the expected number of non-missing pairs of responses, or (iii) the trimmed sample size, which is the planned sample size reduced by the anticipated proportion of missing data. The accuracy of the method is assessed by comparing the theoretical results to the Monte Carlo simulated power for the Catellier and Muller multivariate test. Over all experimental conditions, the closest approximation to the empirical power of the Catellier and Muller multivariate test is obtained by adjusting power calculations with the expected number of complete cases. The utility of the method is demonstrated with a multivariate power analysis for a hypothetical oral cancer biomarkers study. We describe how to implement the method using standard, commercially available software products and give example code. Copyright © 2015 John Wiley & Sons, Ltd.
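
    Of the three adjustments listed above, the trimmed sample size is the simplest to sketch; the toy version below applies it to a univariate two-sample t-test rather than to the Hotelling-Lawley trace statistic used in the paper, purely to show the effect of shrinking the planned n by the anticipated missing-data fraction:

      import numpy as np
      from scipy.stats import nct, t as t_dist

      def two_sample_power(delta, sd, n_per_group, alpha=0.05):
          """Power of a two-sided two-sample t-test via the noncentral t distribution."""
          df = 2 * n_per_group - 2
          ncp = delta / (sd * np.sqrt(2.0 / n_per_group))
          t_crit = t_dist.ppf(1 - alpha / 2, df)
          return (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)

      planned_n, missing_rate = 40, 0.15                  # hypothetical planning values
      trimmed_n = int(np.floor(planned_n * (1 - missing_rate)))
      print("planned n:", planned_n, "power:", round(two_sample_power(0.6, 1.0, planned_n), 3))
      print("trimmed n:", trimmed_n, "power:", round(two_sample_power(0.6, 1.0, trimmed_n), 3))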

  12. Metallographic Characterization of Wrought Depleted Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsyth, Robert Thomas; Hill, Mary Ann

    Metallographic characterization was performed on wrought depleted uranium (DU) samples taken from the longitudinal and transverse orientations from specific locations on two specimens. Characterization of the samples included general microstructure, inclusion analysis, grain size analysis, and microhardness testing. Comparisons of the characterization results were made to determine any differences based on specimen, sample orientation, or sample location. In addition, the characterization results for the wrought DU samples were also compared with data obtained from the metallographic characterization of cast DU samples previously characterized. No differences were observed in microstructure, inclusion size, morphology, and distribution, or grain size in regard to specimen, location, or orientation for the wrought depleted uranium samples. However, a small difference was observed in average hardness with regard to orientation at the same locations within the same specimen. The longitudinal samples were slightly harder than the transverse samples from the same location of the same specimen. This was true for both wrought DU specimens. Comparing the wrought DU sample data with the previously characterized cast DU sample data, distinct differences in microstructure, inclusion size, morphology and distribution, grain size, and microhardness were observed. As expected, the microstructure of the wrought DU samples consisted of small recrystallized grains which were uniform, randomly oriented, and equiaxed with minimal twinning observed in only a few grains. In contrast, the cast DU microstructure consisted of large irregularly shaped grains with extensive twinning observed in most grains. Inclusions in the wrought DU samples were elongated, broken and cracked and light and dark phases were observed in some inclusions. The mean inclusion area percentage for the wrought DU samples ranged from 0.08% to 0.34% and the average density from all wrought DU samples was 1.62E+04/cm2. Inclusions in the cast DU samples were equiaxed and intact with light and dark phases observed in some inclusions. The mean inclusion area percentage for the cast DU samples ranged from 0.93% to 1.00% and the average density from all cast DU samples was 2.83E+04/cm2. The average mean grain area from all wrought DU samples was 141 μm2 while the average mean grain area from all cast DU samples was 1.7 mm2. The average Knoop microhardness from all wrought DU samples was 215 HK and the average Knoop microhardness from all cast DU samples was 264 HK.

  13. Occupancy in continuous habitat

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.

    2012-01-01

    The probability that a site has at least one individual of a species ('occupancy') has come to be widely used as a state variable for animal population monitoring. The available statistical theory for estimation when detection is imperfect applies particularly to habitat patches or islands, although it is also used for arbitrary plots in continuous habitat. The probability that such a plot is occupied depends on plot size and home-range characteristics (size, shape and dispersion) as well as population density. Plot size is critical to the definition of occupancy as a state variable, but clear advice on plot size is missing from the literature on the design of occupancy studies. We describe models for the effects of varying plot size and home-range size on expected occupancy. Temporal, spatial, and species variation in average home-range size is to be expected, but information on home ranges is difficult to retrieve from species presence/absence data collected in occupancy studies. The effect of variable home-range size is negligible when plots are very large (>100 x area of home range), but large plots pose practical problems. At the other extreme, sampling of 'point' plots with cameras or other passive detectors allows the true 'proportion of area occupied' to be estimated. However, this measure equally reflects home-range size and density, and is of doubtful value for population monitoring or cross-species comparisons. Plot size is ill-defined and variable in occupancy studies that detect animals at unknown distances, the commonest example being unlimited-radius point counts of song birds. We also find that plot size is ill-defined in recent treatments of "multi-scale" occupancy; the respective scales are better interpreted as temporal (instantaneous and asymptotic) rather than spatial. Occupancy is an inadequate metric for population monitoring when it is confounded with home-range size or detection distance.
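
    The dependence of occupancy on plot size and home-range size can be sketched under simple assumptions: if circular home ranges of radius r have centres distributed as a Poisson process with density D, a square plot of side L is occupied unless no centre falls within r of it. A minimal illustration under those simplifying assumptions (the parameter values are arbitrary):

      import numpy as np

      def expected_occupancy(density_per_km2, plot_side_km, home_range_radius_km):
          """P(plot occupied), assuming Poisson-distributed circular home-range centres.
          The effective area is the square plot buffered outward by the home-range radius."""
          effective_area = (plot_side_km ** 2
                            + 4 * plot_side_km * home_range_radius_km
                            + np.pi * home_range_radius_km ** 2)
          return 1.0 - np.exp(-density_per_km2 * effective_area)

      # Illustrative values: 2 animals per km2, a 1-km2 plot, 0.3-km home-range radius.
      print(round(expected_occupancy(2.0, 1.0, 0.3), 3))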

  14. Pelvic dimorphism in relation to body size and body size dimorphism in humans.

    PubMed

    Kurki, Helen K

    2011-12-01

    Many mammalian species display sexual dimorphism in the pelvis, where females possess larger dimensions of the obstetric (pelvic) canal than males. This is contrary to the general pattern of body size dimorphism, where males are larger than females. Pelvic dimorphism is often attributed to selection relating to parturition, or as a developmental consequence of secondary sexual differentiation (different allometric growth trajectories of each sex). Among anthropoid primates, species with higher body size dimorphism have higher pelvic dimorphism (in converse directions), which is consistent with an explanation of differential growth trajectories for pelvic dimorphism. This study investigates whether the pattern holds intraspecifically in humans by asking: Do human populations with high body size dimorphism also display high pelvic dimorphism? Previous research demonstrated that in some small-bodied populations, relative pelvic canal size can be larger than in large-bodied populations, while others have suggested that larger-bodied human populations display greater body size dimorphism. Eleven human skeletal samples (total N: male = 229, female = 208) were utilized, representing a range of body sizes and geographical regions. Skeletal measurements of the pelvis and femur were collected and indices of sexual dimorphism for the pelvis and femur were calculated for each sample [ln(M/F)]. Linear regression was used to examine the relationships between indices of pelvic and femoral size dimorphism, and between pelvic dimorphism and female femoral size. Contrary to expectations, the results suggest that pelvic dimorphism in humans is generally not correlated with body size dimorphism or female body size. These results indicate that divergent patterns of dimorphism exist for the pelvis and body size in humans. Implications for the evaluation of the evolution of pelvic dimorphism and rotational childbirth in Homo are considered. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. The effect of precursor types on the magnetic properties of Y-type hexa-ferrite composite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Chin Mo; Na, Eunhye; Kim, Ingyu

    2015-05-07

    With magnetic composite including uniform magnetic particles, we expect to realize good high-frequency soft magnetic properties. We produced needle-like goethite (α-FeOOH) nanoparticles with nearly uniform diameter and length of 20 and 500 nm. Zn-doped Y-type hexa-ferrite samples were prepared by the solid-state reaction method using the uniform goethite and non-uniform hematite (Fe2O3) with size of <1 μm, respectively. The micrographs observed by scanning electron microscopy show that more uniform hexagonal plates are observed in the ZYG-sample (Zn-doped Y-type hexa-ferrite prepared with the uniform goethite) than in the ZYH-sample (Zn-doped Y-type hexa-ferrite prepared with the non-uniform hematite). The permeability (μ′) and loss tangent (δ) at 2 GHz are 2.31 and 0.07 in the ZYG-sample and 2.0 and 0.07 in the ZYH-sample, respectively. We can observe that permeability and loss tangent are strongly related to the particle size and uniformity based on the nucleation, growth, and two magnetizing mechanisms: spin rotation and domain wall motion. The complex permeability spectra also can be numerically separated into spin rotational and domain wall resonance components.

  16. Confidence intervals and sample size calculations for the standardized mean difference effect size between two normal populations under heteroscedasticity.

    PubMed

    Shieh, G

    2013-12-01

    The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.
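
    For contrast, the conventional homoscedastic interval for the standardized mean difference is obtained by inverting the noncentral t distribution; the sketch below implements that familiar baseline, not the improved heteroscedastic procedure proposed in the article:

      import numpy as np
      from scipy.stats import nct
      from scipy.optimize import brentq

      def cohens_d_ci(d, n1, n2, level=0.95):
          """Noncentral-t confidence interval for the population standardized mean difference,
          assuming equal variances in the two groups."""
          df = n1 + n2 - 2
          scale = np.sqrt(n1 * n2 / (n1 + n2))
          t_obs = d * scale
          lower = brentq(lambda nc: nct.sf(t_obs, df, nc) - (1 - level) / 2, -50, 50)
          upper = brentq(lambda nc: nct.cdf(t_obs, df, nc) - (1 - level) / 2, -50, 50)
          return lower / scale, upper / scale

      print(cohens_d_ci(d=0.5, n1=30, n2=30))   # illustrative inputs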

  17. Extreme grain size reduction in dolomite: microstructures and mechanisms.

    NASA Astrophysics Data System (ADS)

    Kennedy, L.; White, J. C.

    2007-12-01

    Pure dolomite samples were deformed at room temperature and under a variety of confining pressures (0-100 MPa) to examine the processes of grain size reduction. The dolomite is composed of > 97 vol. % dolomite with accessory quartz, calcite, tremolite, and muscovite and has been metamorphosed to amphibolite facies and subsequently annealed. At the hand sample scale, the rock is isotropic, except for minor, randomly oriented tremolite porphyroblasts, and weakly aligned muscovite. At the thin section scale, coarser grains have lobate grain boundaries, exhibit minor to no undulose extinction and few deformation twins, although well-developed subgrains are present. Growth twins are common, as is the presence of well developed {1011} cleavage. Mean grain size is 476 microns, and porosity is essentially zero (Austin and Kennedy, 2006). Samples contain diagonal to subvertical faults. Fractures are lined with an exceptionally fine-grained, powdered dolomite. Even experiments done at no confining pressure and stopped before sliding on the fracture surfaces occurred had significant powdered gouge developed along the surfaces. In this regard, fracturing of low porosity, pure dolomite, with metamorphic textures (e.g. lobate, interlocking grain boundaries) results in the development of fine-grained gouge. As expected, the dolomite exhibited an increase in strength with increasing confining pressure, with a maximum differential stress of ~400 MPa at 100 MPa confining pressure. At each chosen confining pressure, two experiments were performed and stopped at different stages along the load-displacement curve: just before yield stress and at peak stress. Microstructures at each stage were observed in order to determine the possible mechanisms for extreme grain size reduction. SEM work shows that in samples with little to no apparent displacement along microfractures, extreme grain size reduction still exists, suggesting that frictional sliding and subsequent cataclasis may not be the mechanism responsible for grain size reduction. Within individual dolomite clasts, apparent Mode I cracks are also lined with powdered gouge. Alternative mechanisms for grain size reduction are explored. Austin et al. 2005, Geological Society, London, Special Publications, 243, 51-66.

  18. Geotechnical characterization of mined clay from Appalachian Ohio: challenges and implications for the clay mining industry.

    PubMed

    Moran, Anthony R; Hettiarachchi, Hiroshan

    2011-07-01

    Clayey soil found in coal mines in Appalachian Ohio is often sold to landfills for constructing Recompacted Soil Liners (RSL) in landfills. Since clayey soils possess low hydraulic conductivity, the suitability of mined clay for RSL in Ohio is first assessed by determining its clay content. When soil samples are tested in a laboratory, the same engineering properties are typically expected for the soils originated from the same source, provided that the testing techniques applied are standard, but mined clay from Appalachian Ohio has shown drastic differences in particle size distribution depending on the sampling and/or laboratory processing methods. Sometimes more than a 10 percent decrease in the clay content is observed in the samples collected at the stockpiles, compared to those collected through reverse circulation drilling. This discrepancy poses a challenge to geotechnical engineers who work on the prequalification process of RSL material as it can result in misleading estimates of the hydraulic conductivity of the samples. This paper describes a laboratory investigation conducted on mined clay from Appalachian Ohio to determine how and why the standard sampling and/or processing methods can affect the grain-size distributions. The variation in the clay content was determined to be due to heavy concentrations of shale fragments in the clayey soils. It was also concluded that, in order to obtain reliable grain size distributions from the samples collected at a stockpile of mined clay, the material needs to be processed using a soil grinder. Otherwise, the samples should be collected through drilling.

  19. Geotechnical Characterization of Mined Clay from Appalachian Ohio: Challenges and Implications for the Clay Mining Industry

    PubMed Central

    Moran, Anthony R.; Hettiarachchi, Hiroshan

    2011-01-01

    Clayey soil found in coal mines in Appalachian Ohio is often sold to landfills for constructing Recompacted Soil Liners (RSL) in landfills. Since clayey soils possess low hydraulic conductivity, the suitability of mined clay for RSL in Ohio is first assessed by determining its clay content. When soil samples are tested in a laboratory, the same engineering properties are typically expected for the soils originated from the same source, provided that the testing techniques applied are standard, but mined clay from Appalachian Ohio has shown drastic differences in particle size distribution depending on the sampling and/or laboratory processing methods. Sometimes more than a 10 percent decrease in the clay content is observed in the samples collected at the stockpiles, compared to those collected through reverse circulation drilling. This discrepancy poses a challenge to geotechnical engineers who work on the prequalification process of RSL material as it can result in misleading estimates of the hydraulic conductivity of the samples. This paper describes a laboratory investigation conducted on mined clay from Appalachian Ohio to determine how and why the standard sampling and/or processing methods can affect the grain-size distributions. The variation in the clay content was determined to be due to heavy concentrations of shale fragments in the clayey soils. It was also concluded that, in order to obtain reliable grain size distributions from the samples collected at a stockpile of mined clay, the material needs to be processed using a soil grinder. Otherwise, the samples should be collected through drilling. PMID:21845150

  20. Detection of Amyloid Beta (Aβ) Oligomeric Composition Using Matrix-Assisted Laser Desorption Ionization Mass Spectrometry (MALDI MS)

    NASA Astrophysics Data System (ADS)

    Wang, Jasmine S.-H.; Whitehead, Shawn N.; Yeung, Ken K.-C.

    2018-02-01

    The use of MALDI MS as a fast and direct method to detect the Aβ oligomers of different masses is examined in this paper. Experimental results suggest that Aβ oligomers are ionized and detected as singly charged ions, and thus, the resulting mass spectrum directly reports the oligomer size distribution. Validation experiments were performed to verify the MS data against artifacts. Mass spectra collected from modified Aβ peptides with different propensities for aggregation were compared. Generally, the relative intensities of multimers were higher from samples where oligomerization was expected to be more favorable, and vice versa. MALDI MS was also able to detect the differences in oligomeric composition before and after the incubation/oligomerization step. Such differences in sample composition were also independently confirmed with an in vitro Aβ toxicity study on primary rat cortical neurons. An additional validation was accomplished through removal of oligomers from the sample using molecular weight cutoff filters; the resulting MS data correctly reflected the removal at the expected cutoff points. The results collectively validated the ability of MALDI MS to assess the monomeric/multimeric composition of Aβ samples.

  1. Separation of cancer cells from white blood cells by pinched flow fractionation.

    PubMed

    Pødenphant, Marie; Ashley, Neil; Koprowska, Kamila; Mir, Kalim U; Zalkovskij, Maksim; Bilenberg, Brian; Bodmer, Walter; Kristensen, Anders; Marie, Rodolphe

    2015-12-21

    In this paper, the microfluidic size-separation technique pinched flow fractionation (PFF) is used to separate cancer cells from white blood cells (WBCs). The cells are separated at efficiencies above 90% for both cell types. Circulating tumor cells (CTCs) are found in the blood of cancer patients and can form new tumors. CTCs are rare cells in blood, but they are important for the understanding of metastasis. There is therefore a high interest in developing a method for the enrichment of CTCs from blood samples, which also enables further analysis of the separated cells. The separation is challenged by the size overlap between cancer cells and the 10(6) times more abundant WBCs. The size overlap prevents high-efficiency separation; however, we demonstrate that cell deformability can be exploited in PFF devices to achieve higher efficiencies than expected from the size distribution of the cells.

  2. The size of an ethno-cultural community as a social determinant of health for Chinese seniors.

    PubMed

    Chau, Shirley; Lai, Daniel W L

    2011-12-01

    The present study investigated the link between the size of the Chinese community and the health of Chinese seniors in Canada. A secondary data analysis of survey data from a representative sample of 2,272 Chinese older adults aged 55 and over was conducted. Hierarchical regression analyses were performed to assess the effects of the size of Chinese communities on Chinese seniors' health. Chinese seniors residing in communities with a small Chinese population reported better physical and mental health than the Chinese seniors residing in communities with a larger Chinese population. The findings were contrary to the expectation that the health of Chinese seniors would be better in cities with large Chinese communities. These findings raise new questions for future investigations into the dynamics and impact of ethnic community size, and the importance of studying intragroup differences within ethno-cultural groups to better understand health disparities in ethnic groups.

  3. Burning rate for steel-cased, pressed binderless HMX

    NASA Technical Reports Server (NTRS)

    Fifer, R. A.; Cole, J. E.

    1980-01-01

    The burning behavior of pressed binderless HMX laterally confined in 6.4 mm i.d. steel cases was measured over the pressure range 1.45 to 338 MPa in a constant pressure strand burner. The measured regression rates are compared to those reported previously for unconfined samples. It is shown that lateral confinement results in a several-fold decrease in the regression rate for the coarse particle size HMX above the transition to super fast regression. For class E samples, confinement shifts the transition to super fast regression from low pressure to high pressure. These results are interpreted in terms of the previously proposed progressive deconsolidation mechanism. Preliminary holographic photography and closed bomb tests are also described. Theoretical one dimensional modeling calculations were carried out to predict the expected flame height (particle burn out distance) as a function of particle size and pressure for binderless HMX burning by a progressive deconsolidation mechanism.

  4. Two Different Views on the World Around Us: The World of Uniformity versus Diversity

    PubMed Central

    Nayakankuppam, Dhananjay

    2016-01-01

    We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of “uniformity.” As such, they easily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of “diversity,” such that they “hesitate” to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population in general as a constitution either of homogeneous or heterogeneous entities. PMID:27977788

  5. Extracting samples of high diversity from thematic collections of large gene banks using a genetic-distance based approach

    PubMed Central

    2010-01-01

    Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should be also developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample - maintenance of at least 80% of the allelic richness of the thematic collection, with, approximately, 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change on phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds", for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
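
    The distance-based selection step can be illustrated with a simple greedy maximin rule over a pairwise genetic distance matrix; this is a generic sketch with simulated marker data, not the study's exact sampling procedure or the M-strategy implementations it is compared against:

      import numpy as np

      def greedy_core_selection(dist, core_size, seed_index=0):
          """Repeatedly add the accession whose minimum distance to the current core is largest."""
          n = dist.shape[0]
          selected = [seed_index]
          while len(selected) < core_size:
              remaining = [i for i in range(n) if i not in selected]
              min_dist_to_core = [dist[i, selected].min() for i in remaining]
              selected.append(remaining[int(np.argmax(min_dist_to_core))])
          return sorted(selected)

      rng = np.random.default_rng(1)
      marker_scores = rng.random((50, 10))               # 50 simulated accessions, 10 marker scores
      dist = np.linalg.norm(marker_scores[:, None, :] - marker_scores[None, :, :], axis=-1)
      print(greedy_core_selection(dist, core_size=8))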

  6. Study of magnetic and electrical properties of nanocrystalline Mn doped NiO.

    PubMed

    Raja, S Philip; Venkateswaran, C

    2011-03-01

    Diluted Magnetic Semiconductors (DMS) have been intensively explored in recent years for their applications in spintronics, which is expected to revolutionize present-day information technology. Nanocrystalline Mn-doped NiO samples were prepared using a chemical co-precipitation method with the aim of realizing room-temperature ferromagnetism. Phase formation of the samples was studied using X-ray diffraction-Rietveld analysis. Scanning electron microscopy and energy-dispersive X-ray analysis results reveal the nanocrystalline nature of the samples, agglomeration of the particles, considerable particle size distribution and the near stoichiometry. Thermomagnetic curves confirm the single-phase formation of the samples up to 1% doping of Mn. Vibrating Sample Magnetometer measurements indicate the absence of ferromagnetism at room temperature. This may be due to the low concentration of Mn2+ ions having weak indirect coupling with Ni2+ ions. The lack of free carriers is also expected to be the reason for the absence of ferromagnetism, which is in agreement with the results of resistivity measurements using impedance spectroscopy. The Arrhenius plot shows the presence of two thermally activated regions, and the activation energy for the nanocrystalline Mn-doped sample was found to be greater than that of undoped NiO. This is attributed to the doping effect of Mn. However, the dielectric constant of the samples was found to be of the same order of magnitude, very much comparable with that of undoped NiO.

  7. Decision-theoretic approach to data acquisition for transit operations planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, S.G.

    The most costly element of transportation planning and modeling activities in the past has usually been that of data acquisition. This is even truer today when the unit costs of data collection are increasing rapidly and at the same time budgets are severely limited by continuing policies of fiscal austerity in the public sector. The overall objectives of this research were to improve the decisions and decision-making capabilities of transit operators or planners in short-range transit planning, and to improve the quality and cost-effectiveness of associated route or corridor-level data collection and service monitoring activities. A new approach was presented for sequentially updating the parameters of both simple and multiple linear regression models with stochastic regressors, and for determining the expected value of sample information and expected net gain of sampling for associated sample designs. A new approach was also presented for estimating and updating (both spatially and temporally) the parameters of multinomial logit discrete choice models, and for determining associated optimal sample designs for attribute-based and choice-based sampling methods. The approach provides an effective framework for addressing the issue of optimal sampling method and sample size, which to date have been largely unresolved. The application of these methodologies and the feasibility of the decision-theoretic approach was illustrated with a hypothetical case study example.
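
    The expected value of sample information and expected net gain of sampling can be illustrated in the simplest conjugate setting, a normal mean with a normal prior and a binary adopt/do-nothing decision; a Monte Carlo preposterior sketch with made-up payoffs and costs (the report's regression and discrete-choice settings are considerably more elaborate):

      import numpy as np

      rng = np.random.default_rng(0)
      prior_mean, prior_sd, sigma = 0.5, 2.0, 4.0        # hypothetical prior and sampling spread
      value_per_unit = 1000.0                            # hypothetical payoff per unit of the mean

      def evsi_and_engs(n, cost_per_obs=20.0, sims=200_000):
          """Adopt the service change if the posterior mean is positive; payoff is theta * value."""
          theta = rng.normal(prior_mean, prior_sd, sims)            # simulate true states
          ybar = rng.normal(theta, sigma / np.sqrt(n), sims)        # simulate sample means
          post_prec = 1 / prior_sd**2 + n / sigma**2
          post_mean = (prior_mean / prior_sd**2 + n * ybar / sigma**2) / post_prec
          value_with_sample = np.mean(np.where(post_mean > 0, theta, 0.0)) * value_per_unit
          value_without = max(prior_mean, 0.0) * value_per_unit     # prior-optimal decision
          evsi = value_with_sample - value_without
          return evsi, evsi - n * cost_per_obs

      for n in (5, 20, 80):
          evsi, engs = evsi_and_engs(n)
          print(f"n = {n:3d}: EVSI ~ {evsi:7.1f}, expected net gain ~ {engs:7.1f}")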

  8. Detection of internal structure by scattered light intensity: Application to kidney cell sorting

    NASA Technical Reports Server (NTRS)

    Goolsby, C. L.; Kunze, M. E.

    1985-01-01

    Scattered light measurements in flow cytometry were successfully used to distinguish cells on the basis of differing morphology and internal structure. Differences in scattered light patterns due to changes in internal structure would be expected to occur at large scattering angles. Practically, the results of these calculations suggest that in experimental situations an array of detectors would be useful. Although in general the detection of the scattered light intensity at several intervals within the 10 to 60 degree region would be sufficient, there are many examples where increased sensitivity could be achieved at other angles. The ability to measure at many different angular intervals would allow the experimenter to empirically select the optimum intervals for the varying conditions of cell size, N/C ratio, granule size and internal structure from sample to sample. The feasibility of making scattered light measurements at many different intervals in flow cytometry was demonstrated. The implementation of simplified versions of these techniques in conjunction with independent measurements of cell size could potentially improve the usefulness of flow cytometry in the study of the internal structure of cells.

  9. Advanced hierarchical distance sampling

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is relatively newly developed spatial distance sampling models that accommodate explicit models describing the spatial distribution of individuals known as Point Process models. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.

  10. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature.

    PubMed

    Szucs, Denes; Ioannidis, John P A

    2017-03-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64-1.46) for nominally statistically significant results and D = 0.24 (0.11-0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
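    The power figures quoted above are the kind produced by a standard two-sample t-test power calculation. Below is a minimal sketch, assuming a per-group sample size of 20 (an assumption for the example, not a value from the paper) and the conventional small/medium/large effect-size benchmarks; the statsmodels power routines are used for convenience.

```python
# Post-hoc power of a two-sample t-test for conventional effect sizes at an
# assumed per-group n, plus the n needed for 80% power at d = 0.24 (the
# median nonsignificant effect size reported in the abstract).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    p = analysis.power(effect_size=d, nobs1=20, alpha=0.05, ratio=1.0)
    print(f"{label:6s} effect (d={d}): power = {p:.2f}")

n_needed = analysis.solve_power(effect_size=0.24, power=0.80, alpha=0.05)
print(f"n per group for 80% power at d=0.24: {n_needed:.0f}")
```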

  11. Size Effects in the Resistivity of Kondo and Spin-Glass Wires

    NASA Astrophysics Data System (ADS)

    van Haesendonck, Chris

    1998-03-01

    Recently, several experiments have focused on possible size effects for the Kondo scattering in thin-film structures of very dilute magnetic alloys (concentration ~ 100 ppm) (for an overview, see M.A. Blachly and N. Giordano, Phys. Rev. B 51, 12537 (1995)). Intuitively, one expects size effects to occur as soon as the sample dimensions become smaller than the size of the Kondo screening cloud which induces a compensation of the local magnetic moments below the Kondo temperature. Since the size of the Kondo cloud is predicted to be of the order of 1 μm, one should be able to observe pronounced size effects in thin-film Kondo alloys which have been patterned by standard electron beam lithography. Experiments performed by other groups have indeed revealed an important reduction of the slope of the Kondo resistivity for samples with micrometer dimensions. These experiments also show that the size effects are affected by disorder. On the other hand, our experiments on AuFe wires, which have been prepared by flash evaporation as well as by ion implantation, indicate the absence of size effects for the Kondo scattering down to a width of 38 nm. Therefore, the existence of the Kondo cloud remains a controversial issue. The size effects have also been investigated for more concentrated spin-glass alloys (concentration ~ 1 at.%). The resistivity measurements of thin-film spin glasses indicate that intrinsic size effects may be present for length scales below 100 nm (K.R. Lane et al., Phys. Rev. B 51, 945 (1995); G. Neuttiens et al., Europhys. Lett. 34, 617 (1996)). Due to the damping of the RKKY interaction by elastic defect scattering, size effects in the spin-glass regime can be strongly affected by disorder.

  12. Parent perspectives on attrition from tertiary care pediatric weight management programs.

    PubMed

    Hampl, Sarah; Demeule, Michelle; Eneli, Ihuoma; Frank, Maura; Hawkins, Mary Jane; Kirk, Shelley; Morris, Patricia; Sallinen, Bethany J; Santos, Melissa; Ward, Wendy L; Rhodes, Erinn

    2013-06-01

    To describe parent/caregiver reasons for attrition from tertiary care weight management clinics/programs. A telephone survey was administered to 147 parents from weight management clinics/programs in the National Association of Children's Hospitals and Related Institutions' (now Children's Hospital Association's) FOCUS on a Fitter Future II collaborative. Scheduling, barriers to recommendation implementation, and transportation issues were endorsed by more than half of parents as having a moderate to high influence on their decision not to return. Family motivation and mismatched expectations between families and clinic/program staff were mentioned as influential by more than one-third. Only mismatched expectations correlated with patient demographics and program characteristics. Although limited by small sample size, the study found that parents who left geographically diverse weight management clinics/programs reported similar reasons for attrition. Future efforts should include offering alternative visit times, more treatment options, and financial and transportation assistance and exploring family expectations.

  13. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
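    A minimal sketch of the accuracy-in-parameter-estimation idea follows, using Feldt's F-based confidence interval for coefficient alpha as a stand-in for the composite reliability coefficients treated in the article (the paper's own methods differ); the assumed population reliability, number of items, and target width are illustrative only.

```python
# AIPE-style planning sketch: find the smallest n whose expected confidence
# interval for a reliability coefficient is narrower than a target width.
# Feldt's F-based interval for coefficient alpha (as commonly stated) is used
# here as a stand-in; alpha_pop, k, and target_width are assumptions.
from scipy.stats import f

def expected_width(n, k, alpha_pop, conf=0.95):
    a = 1 - conf
    df1, df2 = n - 1, (n - 1) * (k - 1)
    f_lo, f_hi = f.ppf(a / 2, df1, df2), f.ppf(1 - a / 2, df1, df2)
    # CI: [1 - (1 - alpha)*f_hi, 1 - (1 - alpha)*f_lo]; width is the difference
    return (1 - alpha_pop) * (f_hi - f_lo)

def plan_n(k, alpha_pop, target_width, n_max=5000):
    for n in range(10, n_max):
        if expected_width(n, k, alpha_pop) <= target_width:
            return n
    return None

# e.g. 8 items, assumed reliability 0.85, desired 95% CI width of 0.10
print(plan_n(k=8, alpha_pop=0.85, target_width=0.10))
```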

  14. Characterization of single grain by observing magnetic ejection and rotation in microgravity

    NASA Astrophysics Data System (ADS)

    Uyeda, Chiaki

    A simple and nondestructive method to perform material identification on a single particle is desired in various fields of materials science concerned with nano-sized particles. We propose a method of identification based on magnetization data, which is obtained from field-induced translation and rotation in microgravity [1]. Material identification is possible from magnetization data because an intrinsic value of susceptibility and anisotropy is assigned to every material according to a data book that compiles the published values [2]. Preliminary observation of free translational motion due to a repulsive field-gradient force was reported for mm-sized crystals of corundum [1] and other oxides. Rotational oscillation was observed for various diamagnetic single crystals in a homogeneous field [2]. In order to examine the capability of the above-mentioned material characterization, translation and rotation motions were observed for sub-millimeter-sized quartz, calcite and forsterite under microgravity conditions (MGLAB, Japan; duration: 4.5 s). It is expected from the equations of motion that the two motions are independent of particle mass. In a given field distribution, the acceleration of translation is expected to be uniquely determined by the intrinsic susceptibility of the sample. These properties are examined in the present work by varying experimental parameters. It is noted that observation of the above two motions in microgravity serves as a useful method to detect the magnetization of single small particles, because the system is free of both a sample holder and mass measurement. It is expected that magnetization can be measured on an isolated small sample down to the nano-level, provided that the motion of the sample is observable. For both susceptibility and anisotropy, the range of values observed using microgravity covers the range of compiled published values [2]. Hence material identification is possible for solid materials in general. Diamagnetic magnetization and its anisotropy derive from the three-dimensional distribution of localized electrons. In the case of organic materials, the origin of magnetization was consistently explained in terms of the molecular-orbital method. The investigation was not performed on oxide crystals, partly because the experimental values have not been reported for most of these materials [4]. Improvement of sensitivity using microgravity conditions was necessary in order to understand the overall relationship between electron distribution and anisotropy of susceptibility. [1] K. Hisayoshi et al.: J. Phys.: Conf. Ser. 156 (2009) 012021. [2] R. Gupta: "Landolt-Börnstein", New Series II (1983) 445. [3] C. Uyeda et al. (206) Jpn. J. Appl. Phys. 43 L124. [4] C. Uyeda et al.: Appl. Phys. Lett. (1983) 094103.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fitriana, Karina Nur, E-mail: nurfitriana.karina@gmail.com; Hafizah, Mas Ayu Elita, E-mail: kemasayu@yahoo.com; Manaf, Azwar, E-mail: azwar@ui.ac.id

    Single-phased SrO.6Fe{sub 2-x}Mn{sub x/2}Ti{sub x/2}O{sub 3} (x = 0.0, 0.5, and 1.0) nanoparticles were synthesized through mechanical alloying assisted by an ultrasonic destruction process. Monocrystalline particles were obtained when the x = 0 sample was treated with ultrasonic destruction at a transducer amplitude of 55 μm. Average particle size and crystallite size were reduced significantly, from 723 nm to ∼87 nm, for x = 0. The particle size was not significantly reduced for x = 0.5 and x = 1. On the other hand, substitution of Ti for part of the Fe was expected to have a major effect on reducing particle size; this was indicated by the larger particle and crystallite sizes at x = 1 compared with x = 0.5, in a ratio of approximately 2:1 (in nm). In addition, a higher transducer power was required to modify Strontium Hexaferrite (SHF) with more Ti and with a larger pre-ultrasonic-destruction particle size. It is concluded that the transducer amplitude in the ultrasonic destruction process and the substituted ion both affect the average particle size and crystallite size of SHF.

  16. Drying of Floodplain Forests Associated with Water-Level Decline in the Apalachicola River, Florida - Interim Results, 2006

    USGS Publications Warehouse

    Darst, Melanie R.; Light, Helen M.

    2007-01-01

    Floodplain forests of the Apalachicola River, Florida, are drier in composition today (2006) than they were before 1954, and drying is expected to continue for at least the next 50 years. Drier forest composition is probably caused by water-level declines that occurred as a result of physical changes in the main channel after 1954 and decreased flows in spring and summer months since the 1970s. Forest plots sampled from 2004 to 2006 were compared to forests sampled in the late 1970s (1976-79) using a Floodplain Index (FI) based on species dominance weighted by the Floodplain Species Category, a value that represents the tolerance of tree species to inundation and saturation in the floodplain and consequently, the typical historic floodplain habitat for that species. Two types of analyses were used to determine forest changes over time: replicate plot analysis comparing present (2004-06) canopy composition to late 1970s canopy composition at the same locations, and analyses comparing the composition of size classes of trees on plots in late 1970s and in present forests. An example of a size class analysis would be a comparison of the composition of the entire canopy (all trees greater than 7.5 cm (centimeter) diameter at breast height (dbh)) to the composition of the large canopy tree size class (greater than or equal to 25 cm dbh) at one location. The entire canopy, which has a mixture of both young and old trees, is probably indicative of more recent hydrologic conditions than the large canopy, which is assumed to have fewer young trees. Change in forest composition from the pre-1954 period to approximately 2050 was estimated by combining results from three analyses. The composition of pre-1954 forests was represented by the large canopy size class sampled in the late 1970s. The average FI for canopy trees was 3.0 percent drier than the average FI for the large canopy tree size class, indicating that the late 1970s forests were 3.0 percent drier than pre-1954 forests. The change from the late 1970s to the present was based on replicate plot analysis. The composition of 71 replicate plots sampled from 2004 to 2006 averaged 4.4 percent drier than forests sampled in the late 1970s. The potential composition of future forests (2050 or later) was estimated from the composition of the present subcanopy tree size class (less than 7.5 cm and greater than or equal to 2.5 cm dbh), which contains the greatest percentage of young trees and is indicative of recent hydrologic conditions. Subcanopy trees are the driest size class in present forests, with FIs averaging 31.0 percent drier than FIs for all canopy trees. Based on results from all three sets of data, present floodplain forests average 7.4 percent drier in composition than pre-1954 forests and have the potential to become at least 31.0 percent drier in the future. An overall total change in floodplain forests to an average composition 38.4 percent drier than pre-1954 forests is expected within approximately 50 years. The greatest effects of water-level decline have occurred in tupelo-cypress swamps where forest composition has become at least 8.8 percent drier in 2004-06 than in pre-1954 years. This change indicates that a net loss of swamps has already occurred in the Apalachicola River floodplain, and further losses are expected to continue over the next 50 years. Drying of floodplain forests will result in some low bottomland hardwood forests changing in composition to high bottomland hardwood forests. 
The composition of high bottomland hardwoods will also change, although periodic flooding is still occurring and will continue to limit most of the floodplain to bottomland hardwood species that are adapted to at least short periods of inundation and saturation.

  17. Moessbauer Characterization of Magnetite/Polyaniline Magnetic Nanocomposite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Anselmo F. R.; Faria, Fernando S. E. D. V.; Lopez, Jorge L.

    2010-12-02

    Aniline surface-coated Fe{sub 3}O{sub 4} nanoparticles have been successfully synthesized by UV irradiation, varying the time and the acid medium (HCl, HNO{sub 3}, or H{sub 2}SO{sub 4}). The synthesized material represents a promising platform for application in nerve regeneration. XRD patterns are consistent with the crystalline structure of magnetite. Nevertheless, for UV irradiation times longer than 2 h, extra XRD lines reveal the presence of goethite. The mean crystallite size of uncoated particles is estimated to be 25.4 nm, while that size is reduced to 19.9 nm for the sample UV-irradiated in HCl medium for 4 h. Moessbauer spectra of uncoated nanoparticles reveal the occurrence of thermal relaxation at room temperature, while the 77 K Moessbauer spectrum suggests the occurrence of electron localization effects similar to those expected in bulk magnetite. The Moessbauer spectrum of the sample UV-irradiated in HCl medium for 4 h confirms the presence of the goethite phase. For this sample, the thermal relaxation is more evident, since the room temperature spectrum shows a larger spectral area for the nonmagnetic component due to the smaller crystallite size. Meanwhile, the 77 K Moessbauer spectrum suggests the absence of the electron localization effect above 77 K.

  18. A Meta-Analysis on Antecedents and Outcomes of Detachment from Work.

    PubMed

    Wendsche, Johannes; Lohmann-Haislah, Andrea

    2016-01-01

    Detachment from work has been proposed as an important non-work experience helping employees to recover from work demands. This meta-analysis (86 publications, k = 91 independent study samples, N = 38,124 employees) examined core antecedents and outcomes of detachment in employee samples. With regard to outcomes, results indicated average positive correlations between detachment and self-reported mental (i.e., less exhaustion, higher life satisfaction, more well-being, better sleep) and physical (i.e., lower physical discomfort) health, state well-being (i.e., less fatigue, higher positive affect, more intensive state of recovery), and task performance (small to medium sized effects). However, average relationships between detachment and physiological stress indicators and work motivation were not significant while associations with contextual performance and creativity were significant, but negative. Concerning work characteristics, as expected, job demands were negatively related and job resources were positively related to detachment (small sized effects). Further, analyses revealed that person characteristics such as negative affectivity/neuroticism (small sized effect) and heavy work investment (medium sized effect) were negatively related to detachment whereas detachment and demographic variables (i.e., age and gender) were not related. Moreover, we found a medium sized average negative relationship between engagement in work-related activities during non-work time and detachment. For most of the examined relationships heterogeneity of effect sizes was moderate to high. We identified study design, samples' gender distribution, and affective valence of work-related thoughts as moderators for some of these aforementioned relationships. The results of this meta-analysis point to detachment as a non-work (recovery) experience that is influenced by work-related and personal characteristics which in turn is relevant for a range of employee outcomes.

  19. A Meta-Analysis on Antecedents and Outcomes of Detachment from Work

    PubMed Central

    Wendsche, Johannes; Lohmann-Haislah, Andrea

    2017-01-01

    Detachment from work has been proposed as an important non-work experience helping employees to recover from work demands. This meta-analysis (86 publications, k = 91 independent study samples, N = 38,124 employees) examined core antecedents and outcomes of detachment in employee samples. With regard to outcomes, results indicated average positive correlations between detachment and self-reported mental (i.e., less exhaustion, higher life satisfaction, more well-being, better sleep) and physical (i.e., lower physical discomfort) health, state well-being (i.e., less fatigue, higher positive affect, more intensive state of recovery), and task performance (small to medium sized effects). However, average relationships between detachment and physiological stress indicators and work motivation were not significant while associations with contextual performance and creativity were significant, but negative. Concerning work characteristics, as expected, job demands were negatively related and job resources were positively related to detachment (small sized effects). Further, analyses revealed that person characteristics such as negative affectivity/neuroticism (small sized effect) and heavy work investment (medium sized effect) were negatively related to detachment whereas detachment and demographic variables (i.e., age and gender) were not related. Moreover, we found a medium sized average negative relationship between engagement in work-related activities during non-work time and detachment. For most of the examined relationships heterogeneity of effect sizes was moderate to high. We identified study design, samples' gender distribution, and affective valence of work-related thoughts as moderators for some of these aforementioned relationships. The results of this meta-analysis point to detachment as a non-work (recovery) experience that is influenced by work-related and personal characteristics which in turn is relevant for a range of employee outcomes. PMID:28133454

  20. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion lacks sufficient consideration of the true virtues of the delayed-start design and of the implications in terms of required sample size, overall information, or interpretation of the estimate in the context of small populations. The aim was to evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase as a consequence of a reduced time on placebo, which results in a decreased treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts on benefit-risk assessment.
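    The sample-size implication described above can be made concrete with the standard two-sample formula, in which the required n per arm scales with 1/Δ²; the effect sizes, variance, and attenuation factors below are hypothetical and only illustrate how a shorter placebo period inflates the required sample size.

```python
# For a two-arm comparison of means, n per arm = 2 * (z * sigma / delta)^2,
# so a treatment effect attenuated by a shorter placebo period inflates n.
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

full_effect = 4.0                        # effect after a full placebo period
for attenuation in (1.0, 0.75, 0.5):     # shorter placebo time -> smaller delta
    delta = full_effect * attenuation
    print(f"delta={delta:4.1f}  n per arm = {n_per_arm(delta, sigma=10):.0f}")
```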

  1. Mating System and Effective Population Size of the Overexploited Neotropical Tree (Myroxylon peruiferum L.f.) and Their Impact on Seedling Production.

    PubMed

    Silvestre, Ellida de Aguiar; Schwarcz, Kaiser Dias; Grando, Carolina; de Campos, Jaqueline Bueno; Sujii, Patricia Sanae; Tambarussi, Evandro Vagner; Macrini, Camila Menezes Trindade; Pinheiro, José Baldin; Brancalion, Pedro Henrique Santin; Zucchi, Maria Imaculada

    2018-03-16

    The reproductive system of a tree species has a substantial impact on genetic diversity and structure within and among natural populations. Such information should be considered when planning tree planting for forest restoration. Here, we describe the mating system and genetic diversity of an overexploited Neotropical tree, Myroxylon peruiferum L.f. (Fabaceae), sampled from a forest remnant (10 seed trees and 200 seeds) and assess whether the effective population size of nursery-grown seedlings (148 seedlings) is sufficient to prevent inbreeding depression in reintroduced populations. Genetic analyses were performed based on 8 microsatellite loci. M. peruiferum presented a mixed mating system with evidence of biparental inbreeding (t^m - t^s = 0.118). We found low levels of genetic diversity for M. peruiferum (allelic richness: 1.40 to 4.82; expected heterozygosity: 0.29 to 0.52). Based on Ne(v) within progeny, we suggest a sample size of 47 seed trees to achieve an effective population size of 100. The effective population sizes for the nursery-grown seedlings were much smaller (Ne = 27.54-34.86) than that recommended for short-term (Ne ≥ 100) population conservation. Therefore, to obtain a reasonable genetic representation of native tree species and prevent problems associated with inbreeding depression, seedling production for restoration purposes may require a much larger sampling effort than is currently used, a problem that is further complicated by species with a mixed mating system. This study emphasizes the need to integrate species reproductive biology into seedling production programs and connect conservation genetics with ecological restoration.
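    A back-of-envelope version of the seed-tree calculation reads as follows; the per-family contribution to Ne is an assumed constant, chosen only so that the example reproduces the 47 trees quoted above, and it is not the authors' Ne(v) estimator.

```python
# If each open-pollinated family contributes roughly a fixed effective size
# to the pooled sample, the number of seed trees needed for a target total
# Ne is the ratio, rounded up. The per-family value (~2.13) is an assumption.
import math

def seed_trees_needed(target_ne, ne_per_family):
    return math.ceil(target_ne / ne_per_family)

print(seed_trees_needed(target_ne=100, ne_per_family=2.13))   # -> 47
```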

  2. The Kepler Mission: Search for Habitable Planets

    NASA Technical Reports Server (NTRS)

    Borucki, William; Likins, B.; DeVincenzi, Donald L. (Technical Monitor)

    1998-01-01

    Detecting extrasolar terrestrial planets orbiting main-sequence stars is of great interest and importance. Current ground-based methods are only capable of detecting objects about the size or mass of Jupiter or larger. The difficulties encountered with direct imaging of Earth-size planets from space are expected to be resolved in the next twenty years. Space-based photometry of planetary transits is currently the only viable method for detection of terrestrial planets (30-600 times less massive than Jupiter). This method searches the extended solar neighborhood, providing a statistically large sample and the detailed characteristics of each individual case. A robust concept has been developed and proposed as a Discovery-class mission. Its capabilities and strengths are presented.

  3. Search for Fluid Inclusions in a Carbonaceous Chondrite Using a New X-Ray Micro-Tomography Technique Combined with FIB Sampling

    NASA Technical Reports Server (NTRS)

    Tsuchiyama, A.; Miyake, A.; Zolensky, M. E.; Uesugi, K.; Nakano, T.; Takeuchi, A.; Suzuki, Y.; Yoshida, K.

    2014-01-01

    Early solar system aqueous fluids are preserved in some H chondrites as aqueous fluid inclusions in halite (e.g., [1]). Although potential fluid inclusions are also expected in carbonaceous chondrites [2], they have not been definitively confirmed. In order to search for these fluid inclusions, we have developed a new X-ray micro-tomography technique combined with FIB sampling and applied this technique to a carbonaceous chondrite. Experimental: A polished thin section of Sutter's Mill meteorite (CM) was observed with an optical microscope and FE-SEM (JEOL 7001F) for choosing mineral grains of carbonates (mainly calcite) and sulfides (FeS and ZnS) 20-50 microns in typical size, which may have aqueous fluid inclusions. Then, a "house" similar to a cube with a roof (20-30 microns in size) is sampled from the mineral grain by using FIB (FEI Quanta 200 3DS). Then, the house was attached to a thin W-needle by FIB and imaged by a SR-based imaging microtomography system with a Fresnel zone plate at beamline BL47XU, SPring-8, Japan. One sample was imaged at two X-ray energies, 7 and 8 keV, to identify mineral phases (dual-energy microtomography: [3]). The size of a voxel (pixel in 3D) was 50-80 nm, which gave an effective spatial resolution of approx. 200 nm. A terrestrial quartz sample with an aqueous fluid inclusion with a bubble was also examined as a test sample by the same method. Results and discussion: A fluid inclusion of 5-8 microns in quartz was clearly identified in a CT image. A bubble of approx. 4 microns was also identified as refraction contrast although the X-ray absorption difference between fluid and bubble is small. Volumes of the fluid and bubble were obtained from the 3D CT images. Fourteen grains of calcite, two grains of iron sulfide and one grain of (Zn,Fe)S were examined. Ten calcite, one iron sulfide and one (Zn,Fe)S grains have inclusions >1 micron in size (the maximum: approx. 5 microns). The shapes are spherical or irregular. Tiny inclusions (<1 micron) are also present in all the grains examined. These results show that mineral grains have more inclusions than expected from 2D observations. The X-ray absorption of the inclusions shows that they are not solid inclusions. No bubbles were observed inside, indicating that we cannot determine whether they are really aqueous fluids or merely voids. One calcite grain has an inclusion approx. 2 microns in size, which seems to have a bubble and a tiny solid daughter crystal inside (three-phase inclusion). As we know the exact 3D position of the inclusion, we will analyze the inclusion by SIMS after freezing the sample as has been done for a halite sample [3]. The present technique is useful for finding small inclusions not only in carbonaceous chondrites but also in terrestrial materials.

  4. Anomalous permittivity in fine-grain barium titanate

    NASA Astrophysics Data System (ADS)

    Ostrander, Steven Paul

    Fine-grain barium titanate capacitors exhibit anomalously large permittivity. It is often observed that these materials will double or quadruple the room temperature permittivity of a coarse-grain counterpart. However, aside from a general consensus on this permittivity enhancement, the properties of the fine-grain material are poorly understood. This thesis examines the effect of grain size on dielectric properties of a self-consistent set of high density undoped barium titanate capacitors. This set included samples with grain sizes ranging from submicron to ~20 microns, and with densities generally above 95% of the theoretical. A single batch of well characterized powder was milled, dry-pressed then isostatically-pressed. Compacts were fast-fired, but sintering temperature alone was used to control the grain size. With this approach, the extrinsic influences are minimized within the set of samples, but more importantly, they are normalized between samples. That is, with a single batch of powder and with identical green processing, uniform impurity concentration is expected. The fine-grain capacitors exhibited a room temperature permittivity of ~5500 and dielectric losses of ~2%. The Curie temperature decreased by ~5 °C from that of the coarse-grain material, and the two ferroelectric-ferroelectric phase transition temperatures increased by ~10 °C. The grain size induced permittivity enhancement was only active in the tetragonal and orthorhombic phases. Strong dielectric anomalies were observed in samples with grain size as small as ~0.4 μm. It is suggested that the strong first-order character observed in the present data is related to control of microstructure and stoichiometry. Grain size effects on conductivity losses, ferroelectric losses, ferroelectric dispersion, Maxwell-Wagner dispersion, and dielectric aging of permittivity and loss were observed. For the fine-grain material, these observations suggest the suppression of domain wall motion below the Curie transition, and the suppression of conductivity above the Curie transition.

  5. Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.

    PubMed

    Yin, Guosheng; Ma, Yanyuan

    2013-01-01

    The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adds exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructing the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
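    A minimal sketch of the procedure for a normal model follows: the expected bin counts use the MLE fitted to a bootstrap resample, while the observed counts come from the original data. The equal-probability binning and the chi-squared reference with (number of bins - 1) degrees of freedom are assumptions of this illustration rather than details taken from the paper.

```python
# Bootstrap-MLE Pearson test sketch for a normal model.
import numpy as np
from scipy.stats import norm, chi2

def bootstrap_pearson_test(x, n_bins=10, seed=0):
    rng = np.random.default_rng(seed)
    boot = rng.choice(x, size=len(x), replace=True)
    mu_b, sd_b = boot.mean(), boot.std(ddof=0)     # MLE from the bootstrap sample
    # interior cut points giving equal expected probability under that model
    probs = np.linspace(0, 1, n_bins + 1)[1:-1]
    cuts = norm.ppf(probs, loc=mu_b, scale=sd_b)
    observed = np.bincount(np.searchsorted(cuts, x), minlength=n_bins)
    expected = np.full(n_bins, len(x) / n_bins)
    stat = np.sum((observed - expected) ** 2 / expected)
    pval = chi2.sf(stat, df=n_bins - 1)            # assumed reference distribution
    return stat, pval

x = np.random.default_rng(1).normal(5.0, 2.0, size=200)
print(bootstrap_pearson_test(x))
```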

  6. Social Awareness and Action Training (SAAT)

    DTIC Science & Technology

    2015-06-01

    scheduled for September, 2013, and the one-year follow-up in June, 2014. o Preliminary analyses of the pretest-posttest data from Fort Sill and JBLM...training session (pretest, Time 1) and immediately after the last training session (posttest, Time 2). The sample size was estimated based on an expected...reverse-worded items. As noted in Figure 1, data from 20 soldiers on the pretest or posttest (11 from the SRT, 9 from the CAT) were judged to be of

  7. An investigation of microstructural characteristics of contact-lens polymers

    NASA Technical Reports Server (NTRS)

    Singh, Jag J.; Eftekhari, Abe; Upchurch, Billy T.; Burns, Karen S.

    1990-01-01

    The free volume and gas permeability in several contact lens specimens were measured as part of a Space Commercialization Program. Free volume was measured using positron lifetime spectroscopy, while permeability for O2, N2, CO2 gases was measured using mass spectrometry and polarography. Permeability for all gases increases with the mean free volume cell size in the test samples. As might be expected, the specimens with the highest free volume fraction also exhibit the lowest Rockwell Hardness Number. An interesting corollary is the finding that the presence of fluorine atoms in the lens chemical structure inhibits filling up of their free volume cells. This is expected to allow the lenses to breathe freely while in actual use.

  8. BRIEF-REPORT New set of microsatellites for Chinese tallow tree, Triadica sebifera.

    PubMed

    Zhuang, Y F; Wang, Z F; Wu, L F

    2017-04-05

    Chinese tallow (Triadica sebifera) is an important crop and ornamental tree. After its introduction into the USA, it has gradually become a noxious invasive tree in south-eastern America since the middle of the 1900s. Because only six microsatellites had been reported previously for T. sebifera, we report here 28 new microsatellite markers to better understand the genetic diversity and population dynamics of this species. For these 28 microsatellites, the number of alleles per locus ranged from 2 to 16. The expected heterozygosity and the expected heterozygosity corrected for sample size varied from 0.0796 to 0.9081 and from 0.0805 to 0.9176, respectively. These microsatellites will provide additional choices for investigating genetic diversity and structure in T. sebifera.

  9. Fundamental quantum noise mapping with tunnelling microscopes tested at surface structures of subatomic lateral size.

    PubMed

    Herz, Markus; Bouvron, Samuel; Ćavar, Elizabeta; Fonin, Mikhail; Belzig, Wolfgang; Scheer, Elke

    2013-10-21

    We present a measurement scheme that enables quantitative detection of the shot noise in a scanning tunnelling microscope while scanning the sample. As test objects we study defect structures produced on an iridium single crystal at low temperatures. The defect structures appear in the constant current images as protrusions with curvature radii well below the atomic diameter. The measured power spectral density of the noise is very near to the quantum limit with Fano factor F = 1. While the constant current images show detailed structures expected for tunnelling involving d-atomic orbitals of Ir, we find the current noise to be without pronounced spatial variation as expected for shot noise arising from statistically independent events.
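    For reference, the quantum-limited (Poissonian) level against which the measured noise is compared follows the standard shot-noise relation S_I = 2eIF; a trivial sketch with an arbitrary example current:

```python
# Quantum-limited shot-noise power spectral density, S_I = 2 e I F.
# With Fano factor F = 1 the noise is fully Poissonian; the 1 nA example
# current is arbitrary and only illustrates the order of magnitude.
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs

def shot_noise_psd(current_amps: float, fano: float = 1.0) -> float:
    """Current-noise PSD in A^2/Hz for a given mean current and Fano factor."""
    return 2.0 * E_CHARGE * current_amps * fano

print(f"S_I at 1 nA, F = 1: {shot_noise_psd(1e-9):.2e} A^2/Hz")
```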

  10. Efficient estimation of Pareto model: Some modified percentile estimators.

    PubMed

    Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali

    2018-01-01

    The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on the median, the geometric mean, and the expectation of the empirical cumulative distribution function of the first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of the different estimators is assessed in terms of total mean square error and total relative deviation. It is determined that the modified percentile estimator based on the expectation of the empirical cumulative distribution function of the first-order statistic provides more efficient and precise parameter estimates than the other estimators considered. The simulation results were further confirmed using two real-life examples where maximum likelihood and moment estimators were also considered.
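    The percentile-estimation idea can be sketched by contrasting a simple median-based estimator of the Pareto shape parameter with the maximum likelihood estimator; the paper's specific modified estimators (geometric-mean and first-order-statistic versions) differ from this illustration.

```python
# Median-based percentile estimator vs. MLE for the Pareto(xm, alpha) model.
# For Pareto I, F(median) = 0.5 = 1 - (xm/median)^alpha, so
# alpha = ln(2) / ln(median/xm); the scale xm is estimated by the sample minimum.
import numpy as np

def pareto_percentile_estimate(x):
    xm = x.min()
    alpha = np.log(2.0) / np.log(np.median(x) / xm)
    return xm, alpha

def pareto_mle(x):
    xm = x.min()
    alpha = len(x) / np.sum(np.log(x / xm))
    return xm, alpha

rng = np.random.default_rng(0)
x = 2.0 * (1.0 + rng.pareto(a=3.0, size=500))    # Pareto I with xm=2, alpha=3
print("percentile:", pareto_percentile_estimate(x))
print("mle       :", pareto_mle(x))
```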

  11. Optical extinction dependence on wavelength and size distribution of airborne dust

    NASA Astrophysics Data System (ADS)

    Pangle, Garrett E.; Hook, D. A.; Long, Brandon J. N.; Philbrick, C. R.; Hallen, Hans D.

    2013-05-01

    The optical scattering from laser beams propagating through atmospheric aerosols has been shown to be very useful in describing air pollution aerosol properties. This research explores and extends that capability to particulate matter. The optical properties of Arizona Road Dust (ARD) samples are measured in a chamber that simulates the particle dispersal of dust aerosols in the atmospheric environment. Visible, near infrared, and long wave infrared lasers are used. Optical scattering measurements show the expected dependence of laser wavelength and particle size on the extinction of laser beams. The extinction at long wavelengths demonstrates reduced scattering, but chemical absorption of dust species must be considered. The extinction and depolarization of laser wavelengths interacting with several size cuts of ARD are examined. The measurements include studies of different size distributions, and their evolution over time is recorded by an Aerodynamic Particle Sizer. We analyze the size-dependent extinction and depolarization of ARD. We present a method of predicting extinction for an arbitrary ARD size distribution. These studies provide new insights for understanding the optical propagation of laser beams through airborne particulate matter.

  12. Thoracic and respirable particle definitions for human health risk assessment.

    PubMed

    Brown, James S; Gordon, Terry; Price, Owen; Asgharian, Bahman

    2013-04-10

    Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fraction of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation, that may be used in the design of experimental studies and interpretation of health effects evidence. We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, adult female, and a 10 yr old child during typical daily activities and breathing patterns. Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. By design, current size-selective sample criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects.

  13. Thoracic and respirable particle definitions for human health risk assessment

    PubMed Central

    2013-01-01

    Background Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fraction of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation, that may be used in the design of experimental studies and interpretation of health effects evidence. Methods We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, adult female, and a 10 yr old child during typical daily activities and breathing patterns. Results Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. Conclusions By design, current size-selective sample criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects. PMID:23575443

  14. Exploring effective sampling design for monitoring soil organic carbon in degraded Tibetan grasslands.

    PubMed

    Chang, Xiaofeng; Bao, Xiaoying; Wang, Shiping; Zhu, Xiaoxue; Luo, Caiyun; Zhang, Zhenhua; Wilkes, Andreas

    2016-05-15

    The effects of climate change and human activities on grassland degradation and soil carbon stocks have become a focus of both research and policy. However, lack of research on appropriate sampling design prevents accurate assessment of soil carbon stocks and stock changes at community and regional scales. Here, we conducted an intensive survey with 1196 sampling sites over an area of 190 km² of degraded alpine meadow. Compared to lightly degraded meadow, soil organic carbon (SOC) stocks in moderately, heavily and extremely degraded meadow were reduced by 11.0%, 13.5% and 17.9%, respectively. Our field survey sampling design was more intensive than necessary to estimate SOC status within a tolerable uncertainty of 10%. Power analysis showed that the optimal sampling density to achieve the desired accuracy would be 2, 3, 5 and 7 sites per 10 km² for lightly, moderately, heavily and extremely degraded meadows, respectively. If a subsequent paired sampling design with the optimum sample size were performed, assuming stock change rates predicted by experimental and modeling results, we estimate that about 5-10 years would be necessary to detect expected trends in SOC in the top 20 cm soil layer. Our results highlight the utility of conducting preliminary surveys to estimate the appropriate sampling density and avoid wasting resources due to over-sampling, and to estimate the sampling interval required to detect an expected sequestration rate. Future studies will be needed to evaluate spatial and temporal patterns of SOC variability. Copyright © 2016. Published by Elsevier Ltd.
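    The sampling-density logic can be sketched with the classical relative-precision formula n = (z*CV/e)², where CV is the coefficient of variation of SOC and e the tolerable relative error; the CV values below are invented placeholders, not estimates from the survey.

```python
# Number of sites needed to estimate mean SOC within a relative tolerance e
# at confidence level (1 - a): n = (z * CV / e)^2. CVs here are hypothetical.
from scipy.stats import norm

def n_sites(cv, rel_error=0.10, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    return (z * cv / rel_error) ** 2

for label, cv in [("light", 0.12), ("moderate", 0.15),
                  ("heavy", 0.20), ("extreme", 0.25)]:
    print(f"{label:9s} CV={cv:.2f}  n = {n_sites(cv):.0f}")
```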

  15. Interactions between target location and reward size modulate the rate of microsaccades in monkeys

    PubMed Central

    Tokiyama, Stefanie; Lisberger, Stephen G.

    2015-01-01

    We have studied how rewards modulate the occurrence of microsaccades by manipulating the size of an expected reward and the location of the cue that sets the expectations for future reward. We found an interaction between the size of the reward and the location of the cue. When monkeys fixated on a cue that signaled the size of future reward, the frequency of microsaccades was higher if the monkey expected a large vs. a small reward. When the cue was presented at a site in the visual field that was remote from the position of fixation, reward size had the opposite effect: the frequency of microsaccades was lower when the monkey was expecting a large reward. The strength of pursuit initiation also was affected by reward size and by the presence of microsaccades just before the onset of target motion. The gain of pursuit initiation increased with reward size and decreased when microsaccades occurred just before or after the onset of target motion. The effect of the reward size on pursuit initiation was much larger than any indirect effects reward might cause through modulation of the rate of microsaccades. We found only a weak relationship between microsaccade direction and the location of the exogenous cue relative to fixation position, even in experiments where the location of the cue indicated the direction of target motion. Our results indicate that the expectation of reward is a powerful modulator of the occurrence of microsaccades, perhaps through attentional mechanisms. PMID:26311180

  16. Strategies for Improving Power in School-Randomized Studies of Professional Development.

    PubMed

    Kelcey, Ben; Phelps, Geoffrey

    2013-12-01

    Group-randomized designs are well suited for studies of professional development because they can accommodate programs that are delivered to intact groups (e.g., schools), the collaborative nature of professional development, and extant teacher/school assignments. Though group designs may be theoretically favorable, prior evidence has suggested that they may be challenging to conduct in professional development studies because well-powered designs will typically require large sample sizes or expect large effect sizes. Using teacher knowledge outcomes in mathematics, we investigated when and the extent to which there is evidence that covariance adjustment on a pretest, teacher certification, or demographic covariates can reduce the sample size necessary to achieve reasonable power. Our analyses drew on multilevel models and outcomes in five different content areas for over 4,000 teachers and 2,000 schools. Using these estimates, we assessed the minimum detectable effect sizes for several school-randomized designs with and without covariance adjustment. The analyses suggested that teachers' knowledge is substantially clustered within schools in each of the five content areas and that covariance adjustment for a pretest or, to a lesser extent, teacher certification, has the potential to transform designs that are unreasonably large for professional development studies into viable studies. © The Author(s) 2014.
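    The minimum detectable effect size (MDES) calculations described above follow the standard two-level, school-randomized design formula; below is a minimal sketch with hypothetical ICC, R-squared, and sample-size values rather than the article's estimates.

```python
# Bloom-style MDES for a two-level cluster-randomized design with treatment
# assigned at the school level; covariance adjustment enters through the
# between- and within-school R-squared terms. All inputs are hypothetical.
from scipy.stats import t

def mdes(J, n, icc, r2_between=0.0, r2_within=0.0, p=0.5, alpha=0.05, power=0.80):
    df = J - 2
    m = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    var_term = (icc * (1 - r2_between) / (p * (1 - p) * J)
                + (1 - icc) * (1 - r2_within) / (p * (1 - p) * J * n))
    return m * var_term ** 0.5

# 40 schools, 5 teachers per school, ICC = 0.15
print("no covariates:", round(mdes(J=40, n=5, icc=0.15), 2))
print("with pretest :", round(mdes(J=40, n=5, icc=0.15,
                                   r2_between=0.6, r2_within=0.4), 2))
```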

  17. Defining the urban area for cross national comparison of health indicators: the EURO-URHIS 2 boundary study.

    PubMed

    Higgerson, James; Birt, Christopher A; van Ameijden, Erik; Verma, Arpana

    2017-05-01

    Despite much research focusing on the impact of the city condition upon health, there still remains a lack of consensus over what constitutes an urban area (UA). This study was conducted to establish comparable boundaries for the UAs participating in EURO-URHIS 2, and to test whether the sample reflected the heterogeneity of urban living. Key UA contacts (n = 28) completed a cross-sectional questionnaire, which included, where available, comparison between Urban Audit city and larger urban zone (LUZ) boundaries and public health administration areas (PHAAs). Additionally, broad health and demographic indicators were sought to test for heterogeneity of the EURO-URHIS 2 sample. Urban Audit city boundaries were found to be suitable for data collection in 100% (n = 21) of UAs where Urban Audit data were available. The remainder (n = 7) identified PHAA boundaries akin to the 'city' level. Heterogeneity was observed in the sample for population size and infant mortality rate. Heterogeneity could not be established for male and female life expectancy. This study was able to establish comparable boundaries for EURO-URHIS 2 data collection, with the 'city' area being selected for data collection. The homogeneity of life expectancy indicators was reflective of sub-regional similarities in life expectancy, whilst population estimates and rates of infant mortality indicated the presence of heterogeneity within the sample. Future work would trial these methods with a larger number of indicators and for a larger number of UAs. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  18. Estimation of regional pulmonary deposition and exposure for fumes from SMAW and GMAW mild and stainless steel consumables.

    PubMed

    Hewett, P

    1995-02-01

    The particle size distributions and bulk fume densities for mild steel and stainless steel welding fumes generated using two welding processes (shielded metal arc welding [SMAW] and gas metal arc welding [GMAW]) were used in mathematical models to estimate regional pulmonary deposition (the fraction of each fume expected to deposit in each region of the pulmonary system) and regional pulmonary exposure (the fraction of each fume expected to penetrate to each pulmonary region and would be collected by a particle size-selective sampling device). Total lung deposition for GMAW fumes was estimated at 60% greater than that of SMAW fumes. Considering both the potential for deposition and the fume specific surface areas, it is likely that for equal exposure concentrations GMAW fumes deliver nearly three times the particle surface area to the lungs as SMAW fumes. This leads to the hypothesis that exposure to GMAW fumes constitutes a greater pulmonary hazard than equal exposure to SMAW fumes. The implications of this hypothesis regarding the design of future health studies of welders is discussed.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Yongmin; Im, Myungshin; Kim, Jae-Woo, E-mail: yymx2@astro.snu.ac.kr, E-mail: mim@astro.snu.ac.kr

    Under the Λ cold dark matter (ΛCDM) cosmological models, massive galaxies are expected to be larger in denser environments through frequent hierarchical mergers with other galaxies. Yet, observational studies of low-redshift early-type galaxies have shown no such trend, standing as a puzzle to solve during the past decade. We analyzed 73,116 early-type galaxies at 0.1 ≤ z < 0.15, adopting a robust nonparametric size measurement technique and extending the analysis to many massive galaxies. We find for the first time that local early-type galaxies heavier than 10{sup 11.2} M{sub ⊙} show a clear environmental dependence in the mass–size relation, in such a way that galaxies are as much as 20%–40% larger in the densest environments than in underdense environments. Splitting the sample into the brightest cluster galaxies (BCGs) and non-BCGs does not affect the result. This result agrees with the ΛCDM cosmological simulations and suggests that mergers played a significant role in the growth of massive galaxies in dense environments as expected in theory.

  20. Copper Decoration of Carbon Nanotubes and High Resolution Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Probst, Camille

    A new process of decorating carbon nanotubes with copper was developed for the fabrication of aluminum-nanotube nanocomposites. The process consists of three stages: oxidation, activation and electroless copper plating on the nanotubes. The oxidation step was required to create chemical functions on the nanotubes, essential for the activation step. Then, catalytic nanoparticles of tin-palladium were deposited on the tubes. Finally, during the electroless copper plating, copper particles with a size between 20 and 60 nm were uniformly deposited on the nanotube surface. The reproducibility of the process was shown by using another type of carbon nanotube. The fabrication of aluminum-nanotube nanocomposites was tested by aluminum vacuum infiltration. Although the infiltration of carbon nanotubes did not produce the expected results, an interesting electron microscopy sample was discovered during the process development: the activated carbon nanotubes. Secondly, scanning transmission electron microscopy (STEM) imaging in SEM was analysed. The images were obtained with a new detector on the field emission scanning electron microscope (Hitachi S-4700). Various parameters were analysed with the use of two different samples: the activated carbon nanotubes (previously obtained) and gold-palladium nanodeposits. Influences of working distance, accelerating voltage and the sample used on the spatial resolution of images obtained with SMART (Scanning Microscope Assessment and Resolution Testing) were analysed. An optimum working distance for the best spatial resolution, related to the sample analysed, was found for imaging in STEM mode. Finally, the relation between probe size and spatial resolution of backscattered electron (BSE) images was studied. An image synthesis method was developed to generate the BSE images from backscattered electron coefficients obtained with the CASINO software. Spatial resolution of the images was determined using SMART. The analysis showed that using a probe size smaller than the size of the observed object (sample features) does not improve the spatial resolution. In addition, the effects of the accelerating voltage, the current intensity and the sample geometry and composition were analysed.

  1. Mothers' developmental expectations for young children in the Philippines.

    PubMed

    Williams, P D; Williams, A R; Lopez, M; Tayko, N P

    2000-08-01

    The purpose of the study was to investigate the developmental timetables of rural and urban mothers in the Central Visayas, Philippines. Examined were mothers' expectations for children's development (physical/perceptual-motor, cognitive, and psychosocial) and mothers' child-rearing practices, as influenced by four selected variables (child's gender and ordinal position, mother's education, and rural or urban residence). The total sample size was 303 mothers [153 of rural residence and 150 urban] in two provinces. Mothers responded to a structured questionnaire that was read aloud to them by trained interviewers, who then recorded the responses. All the mothers had children between 4 and 6 years old; equal numbers of male and female children were included, and the entire economic and educational ranges were represented. MANOVA results showed significant main effects of all four variables on maternal expectations of child development and on maternal child-rearing practices. Also, significant interaction effects of residential location and maternal education on mothers' expectations of children's physical/perceptual-motor development were found (F[2,256]=3.05, P=0.05). Significant interaction effects of maternal education and the child's ordinal position (F[4,256]=2.95, P=0.02) on maternal expectations of children's psychosocial development were also found.
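
    As an illustration of the analysis style described above (not the study's data), the following sketch runs a two-factor MANOVA on synthetic developmental-expectation scores; all variable names and effect sizes are invented for the example.

        # Hypothetical sketch: two-factor MANOVA on synthetic outcome scores.
        import numpy as np
        import pandas as pd
        from statsmodels.multivariate.manova import MANOVA

        rng = np.random.default_rng(0)
        n = 300
        df = pd.DataFrame({
            "residence": rng.choice(["rural", "urban"], n),
            "education": rng.choice(["low", "mid", "high"], n),
        })
        # Three synthetic outcome scores with a small, arbitrary residence effect.
        for col in ["physical", "cognitive", "psychosocial"]:
            df[col] = rng.normal(size=n) + 0.3 * (df["residence"] == "urban")

        fit = MANOVA.from_formula(
            "physical + cognitive + psychosocial ~ residence * education", data=df)
        print(fit.mv_test())  # Wilks' lambda etc. for main effects and interaction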

  2. The relation between statistical power and inference in fMRI

    PubMed Central

    Wager, Tor D.; Yarkoni, Tal

    2017-01-01

    Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI, the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial, particularly with regard to the expected effect sizes in fMRI and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resemble the weak diffuse scenario much more than the strong localized scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region of interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
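
    The power figures quoted above can be checked with a brief simulation; the sketch below assumes a weak brain-behavior correlation of r = 0.2 (an illustrative value, not taken from the paper) and estimates how often a Pearson test at common sample sizes reaches p < 0.05.

        # Simulated power of a Pearson correlation test for a weak effect.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def estimated_power(r_true, n, n_sim=5000, alpha=0.05):
            hits = 0
            for _ in range(n_sim):
                x = rng.standard_normal(n)
                y = r_true * x + np.sqrt(1 - r_true**2) * rng.standard_normal(n)
                if stats.pearsonr(x, y)[1] < alpha:
                    hits += 1
            return hits / n_sim

        for n in (20, 30, 100):
            print(n, round(estimated_power(0.2, n), 3))  # power stays low at n = 20-30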

  3. Discerning some Tylenol brands using attenuated total reflection Fourier transform infrared data and multivariate analysis techniques.

    PubMed

    Msimanga, Huggins Z; Ollis, Robert J

    2010-06-01

    Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were used to classify acetaminophen-containing medicines using their attenuated total reflection Fourier transform infrared (ATR-FT-IR) spectra. Four formulations of Tylenol (Arthritis Pain Relief, Extra Strength Pain Relief, 8 Hour Pain Relief, and Extra Strength Pain Relief Rapid Release) along with 98% pure acetaminophen were selected for this study because of the similarity of their spectral features, with correlation coefficients ranging from 0.9857 to 0.9988. Before acquiring spectra for the predictor matrix, the effects on spectral precision with respect to sample particle size (determined by sieve size opening), force gauge of the ATR accessory, sample reloading, and between-tablet variation were examined. Spectra were baseline corrected and normalized to unity before multivariate analysis. Analysis of variance (ANOVA) was used to study spectral precision. The large particles (35 mesh) showed large variance between spectra, while fine particles (120 mesh) indicated good spectral precision based on the F-test. Force gauge setting did not significantly affect precision. Sample reloading using the fine particle size and a constant force gauge setting of 50 units also did not compromise precision. Based on these observations, data acquisition for the predictor matrix was carried out with the fine particles (sieve size opening of 120 mesh) at a constant force gauge setting of 50 units. After removing outliers, PCA successfully classified the five samples in the first and second components, accounting for 45.0% and 24.5% of the variances, respectively. The four-component PLS-DA model (R² = 0.925 and Q² = 0.906) gave good test spectra predictions with an overall average of 0.961 ± 7.1% RSD versus the expected 1.0 prediction for the 20 test spectra used.
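
    A minimal sketch of the chemometric workflow, using randomly generated spectra in place of the ATR-FT-IR predictor matrix (the data, dimensions, and preprocessing here are assumptions for illustration only):

        # Normalize synthetic spectra to unit maximum, then inspect PCA variance.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        spectra = rng.random((50, 600))                 # 50 spectra x 600 wavenumber points
        spectra /= spectra.max(axis=1, keepdims=True)   # normalize to unity

        pca = PCA(n_components=4).fit(spectra)
        print(pca.explained_variance_ratio_)            # variance captured by PC1-PC4
        scores = pca.transform(spectra)                 # score coordinates used for classification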

  4. Population-Based Resequencing of Experimentally Evolved Populations Reveals the Genetic Basis of Body Size Variation in Drosophila melanogaster

    PubMed Central

    Turner, Thomas L.; Stewart, Andrew D.; Fields, Andrew T.; Rice, William R.; Tarone, Aaron M.

    2011-01-01

    Body size is a classic quantitative trait with evolutionarily significant variation within many species. Locating the alleles responsible for this variation would help understand the maintenance of variation in body size in particular, as well as quantitative traits in general. However, successful genome-wide association of genotype and phenotype may require very large sample sizes if alleles have low population frequencies or modest effects. As a complementary approach, we propose that population-based resequencing of experimentally evolved populations allows for considerable power to map functional variation. Here, we use this technique to investigate the genetic basis of natural variation in body size in Drosophila melanogaster. Significant differentiation of hundreds of loci in replicate selection populations supports the hypothesis that the genetic basis of body size variation is very polygenic in D. melanogaster. Significantly differentiated variants are limited to single genes at some loci, allowing precise hypotheses to be formed regarding causal polymorphisms, while other significant regions are large and contain many genes. By using significantly associated polymorphisms as a priori candidates in follow-up studies, these data are expected to provide considerable power to determine the genetic basis of natural variation in body size. PMID:21437274

  5. Brief communication: Hair density and body mass in mammals and the evolution of human hairlessness.

    PubMed

    Sandel, Aaron A

    2013-09-01

    Humans are unusual among mammals in appearing hairless. Several hypotheses propose explanations for this phenotype, but few data are available to test these hypotheses. To elucidate the evolutionary history of human "hairlessness," a comparative approach is needed. One previous study on primate hair density concluded that great apes have systematically less dense hair than smaller primates. While there is a negative correlation between body size and hair density, it remains unclear whether great apes have less dense hair than is expected for their body size. To revisit the scaling relationship between hair density and body size in mammals, I compiled data from the literature on 23 primates and 29 nonprimate mammals and conducted Phylogenetic Generalized Least Squares regressions. Among anthropoids, there is a significant negative correlation between hair density and body mass. Chimpanzees display the largest residuals, exhibiting less dense hair than is expected for their body size. There is a negative correlation between hair density and body mass among the broader mammalian sample, although the functional significance of this scaling relationship remains to be tested. Results indicate that all primates, and chimpanzees in particular, are relatively hairless compared to other mammals. This suggests that there may have been selective pressures acting on the ancestor of humans and chimpanzees that led to an initial reduction in hair density. To further understand the evolution of human hairlessness, a systematic study of hair density and physiology in a wide range of species is necessary. Copyright © 2013 Wiley Periodicals, Inc.

  6. Evaluation of residual uranium contamination in the dirt floor of an abandoned metal rolling mill.

    PubMed

    Glassford, Eric; Spitz, Henry; Lobaugh, Megan; Spitler, Grant; Succop, Paul; Rice, Carol

    2013-02-01

    A single, large, bulk sample of uranium-contaminated material from the dirt floor of an abandoned metal rolling mill was separated into different types and sizes of aliquots to simulate samples that would be collected during site remediation. The facility rolled approximately 11,000 tons of hot-forged ingots of uranium metal approximately 60 y ago, and it has not been used since that time. Thirty small-mass (≈ 0.7 g) and 15 large-mass (≈ 70 g) samples were prepared from the heterogeneously contaminated bulk material to determine how measurements of the uranium contamination vary with sample size. Aliquots of bulk material were also resuspended in an exposure chamber to produce six samples of respirable particles that were obtained using a cascade impactor. Samples of removable surface contamination were collected by wiping 100 cm² of the interior surfaces of the exposure chamber with 47-mm-diameter fiber filters. Uranium contamination in each of the samples was measured directly using high-resolution gamma ray spectrometry. As expected, results for the individual uranium isotopes measured with the large-mass and small-mass samples are significantly different (p < 0.001), and the coefficient of variation (COV) for the small-mass samples was greater than for the large-mass samples. The uranium isotopic concentrations measured in the air and on the wipe samples were not significantly different from each other, and were also not significantly different (p > 0.05) from results for the large- or small-mass samples. Large-mass samples are more reliable for characterizing heterogeneously distributed radiological contamination than small-mass samples since they exhibit the least variation compared to the mean. Thus, samples should be sufficiently large in mass to ensure that the results are truly representative of the heterogeneously distributed uranium contamination present at the facility. Monitoring exposure of workers and the public as a result of uranium contamination resuspended during site remediation should be evaluated using samples of sufficient size and type to accommodate the heterogeneous distribution of uranium in the bulk material.

  7. Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction

    PubMed Central

    Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; Coe, Jesse; Conrad, Chelsie E.; Dörner, Katerina; Sierra, Raymond G.; Stevenson, Hilary P.; Camacho-Alanis, Fernanda; Grant, Thomas D.; Nelson, Garrett; James, Daniel; Calero, Guillermo; Wachter, Rebekka M.; Spence, John C. H.; Weierstall, Uwe; Fromme, Petra; Ros, Alexandra

    2015-01-01

    The advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ∼4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. This method will also permit an analysis of the dependence of crystal quality on crystal size. PMID:26798818

  8. Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction

    DOE PAGES

    Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; ...

    2015-08-19

    We report that the advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ~4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. Ultimately, this method will also permit an analysis of the dependence of crystal quality on crystal size.

  9. NectarCAM, a camera for the medium sized telescopes of the Cherenkov telescope array

    NASA Astrophysics Data System (ADS)

    Glicenstein, J.-F.; Shayduk, M.

    2017-01-01

    NectarCAM is a camera proposed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA), which covers the core energy range of 100 GeV to 30 TeV. It has a modular design and is based on the NECTAr chip, at the heart of which is a GHz-sampling switched capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 seven-photomultiplier modules, covering a field of view of 8 degrees. Each module includes photomultiplier bases, a high-voltage supply, pre-amplifier, trigger, readout, and Ethernet transceiver. The recorded events last between a few nanoseconds and tens of nanoseconds. The expected performance of the camera is discussed. Prototypes of NectarCAM components have been built to validate the design. Preliminary results of a 19-module mini-camera are presented, as well as future plans for building and testing a full-size camera.

  10. Design of an occulter testbed at flight Fresnel numbers

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Kasdin, N. Jeremy; Kim, Yunjong; Vanderbei, Robert J.

    2015-01-01

    An external occulter is a spacecraft flown along the line of sight of a space telescope to suppress starlight and enable high-contrast direct imaging of exoplanets. Laboratory verification of occulter designs is necessary to validate the optical models used to design occulters and predict their performance. At Princeton, we are designing and building a testbed that allows verification of scaled occulter designs whose suppressed shadow is mathematically identical to that of space occulters. Here, we present a sample design that operates at a flight Fresnel number and is thus representative of a realistic space mission. We present calculations of experimental limits arising from the finite size and propagation distance available in the testbed, limitations due to manufacturing feature size, and a non-ideal input beam. We demonstrate how the testbed is designed to be feature-size limited, and provide an estimate of the expected performance.

  11. Closure and ratio correlation analysis of lunar chemical and grain size data

    NASA Technical Reports Server (NTRS)

    Butler, J. C.

    1976-01-01

    Major element and major element plus trace element analyses were selected from the lunar data base for Apollo 11, 12 and 15 basalt and regolith samples. Summary statistics for each of the six data sets were compiled, and the effects of closure on the Pearson product moment correlation coefficient were investigated using the Chayes and Kruskal approximation procedure. In general, there are two types of closure effects evident in these data sets: negative correlations of intermediate size which are solely the result of closure, and correlations of small absolute value which depart significantly from their expected closure correlations which are of intermediate size. It is shown that a positive closure correlation will arise only when the product of the coefficients of variation is very small (less than 0.01 for most data sets) and, in general, trace elements in the lunar data sets exhibit relatively large coefficients of variation.

  12. Experimental evidence for stochastic switching of supercooled phases in NdNiO3 nanostructures

    NASA Astrophysics Data System (ADS)

    Kumar, Devendra; Rajeev, K. P.; Alonso, J. A.

    2018-03-01

    A first-order phase transition is a dynamic phenomenon. In a multi-domain system, the presence of multiple domains of coexisting phases averages out the dynamical effects, making it nearly impossible to predict the exact nature of phase transition dynamics. Here, we report the metal-insulator transition in samples of sub-micrometer size NdNiO3 where the effect of averaging is minimized by restricting the number of domains under study. We observe the presence of supercooled metallic phases with supercooling of 40 K or more. The transformation from the supercooled metallic to the insulating state is a stochastic process that happens at different temperatures and times in different experimental runs. The experimental results are understood without incorporating material specific properties, suggesting that the behavior is of universal nature. The size of the sample needed to observe individual switching of supercooled domains, the degree of supercooling, and the time-temperature window of switching are expected to depend on the parameters such as quenched disorder, strain, and magnetic field.

  13. Dielectric Characteristics and Microwave Absorption of Graphene Composite Materials

    PubMed Central

    Rubrice, Kevin; Castel, Xavier; Himdi, Mohamed; Parneix, Patrick

    2016-01-01

    Nowadays, many types of materials are being developed for microwave absorption applications. Carbon-based nanoparticles belong to these types of materials. Among these, graphene presents some distinctive features for electromagnetic radiation absorption and thus for microwave isolation applications. In this paper, the dielectric characteristics and microwave absorption properties of epoxy resin loaded with graphene particles are presented from 2 GHz to 18 GHz. The influence of parameters such as particle size (3 µm, 6–8 µm, and 15 µm) and weight ratio (from 5% to 25%) is presented, studied, and discussed. The sample loaded with the smallest graphene size (3 µm) and the highest weight ratio (25%) exhibits a high loss tangent (tanδ = 0.36) and a moderate dielectric constant ε′ = 12–14 in the 8–10 GHz frequency range. As expected, this sample also provides the highest absorption level: from 5 dB/cm at 4 GHz to 16 dB/cm at 18 GHz. PMID:28773948

  14. A Bayesian-frequentist two-stage single-arm phase II clinical trial design.

    PubMed

    Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen

    2012-08-30

    It is well known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To obtain better properties inherited from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and early rejection of the null hypothesis (H0). Measures of the design properties (for example, the probability of early trial termination and the expected sample size) are derived under both frequentist and Bayesian settings. Moreover, under the Bayesian setting, the upper and lower boundaries are determined using the predictive probability of a successful trial outcome. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.
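
    For readers unfamiliar with two-stage operating characteristics, the sketch below computes the probability of early termination and the expected sample size for a generic single-arm design with futility stopping only; the thresholds are illustrative and are not the Bayesian-frequentist boundaries derived in the paper.

        # Operating characteristics of a generic two-stage single-arm design.
        from scipy.stats import binom

        def two_stage_chars(p, n1=15, r1=3, n=35):
            """Stop after stage 1 if responses <= r1; otherwise enroll to n."""
            pet = binom.cdf(r1, n1, p)              # probability of early termination
            expected_n = n1 + (1 - pet) * (n - n1)  # expected sample size
            return pet, expected_n

        for p in (0.2, 0.4):
            pet, en = two_stage_chars(p)
            print(f"p={p}: PET={pet:.3f}, E[N]={en:.1f}")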

  15. Type I error probability spending for post-market drug and vaccine safety surveillance with binomial data.

    PubMed

    Silva, Ivair R

    2018-01-15

    Type I error probability spending functions are commonly used for designing sequential analyses of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, in clinical trials, it is important to minimize the sample size even when the null hypothesis is not rejected. In post-market drug and vaccine safety surveillance, that is not the priority; there, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more suitable for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analyses. Copyright © 2017 John Wiley & Sons, Ltd.
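
    The convex-versus-concave contrast can be made concrete with a power-family spending function alpha(t) = alpha * t^rho, which is convex for rho > 1 and concave for rho < 1; the values below are illustrative and not the specific functions evaluated in the paper.

        # Cumulative Type I error spent at each information fraction t.
        import numpy as np

        alpha = 0.05
        t = np.linspace(0.1, 1.0, 10)      # information fraction
        convex = alpha * t**3              # spends little alpha early (clinical-trial style)
        concave = alpha * t**0.5           # spends more alpha early (surveillance style)
        for ti, cv, cc in zip(t, convex, concave):
            print(f"t={ti:.1f}  convex={cv:.4f}  concave={cc:.4f}")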

  16. Characterizing temporal changes of agricultural particulate matter number concentrations

    NASA Astrophysics Data System (ADS)

    Docekal, G. P.; Mahmood, R.; Larkin, G. P.; Silva, P. J.

    2017-12-01

    It is widely accepted in the literature that particulate matter (PM) is detrimental to human health and the environment as a whole. These effects can vary depending on particle size. This study examines PM size distributions and number concentrations at a poultry house. Despite much literature on PM concentrations at agricultural facilities, few studies have looked at the size distribution of particles at such facilities from the nucleation mode up through the coarse mode. Two optical particle counters (OPCs) were placed, one inside a chicken house and one outside an exhaust fan, to determine particle size distributions. In addition, a scanning mobility particle sizer (SMPS) and an aerodynamic particle sizer (APS) sampled poultry house particles to give sizing information over a full size range of 10 nm to 20 µm. The data collected show several different types of events where observed size distributions changed. While some of these are due to expected dust generation events producing coarse-mode particles, others suggest that particle nucleation and accumulation events at the smaller size ranges also occurred. The data suggest that agricultural facilities have an impact on the presence of PM in the environment beyond the generation of coarse-mode dust. The different types of size distribution changes observed will be discussed.

  17. Which is the Ideal Breast Size?: Some Social Clues for Plastic Surgeons.

    PubMed

    Raposio, Edoardo; Belgrano, Valerio; Santi, PierLuigi; Chiorri, Carlo

    2016-03-01

    To provide plastic surgeons with more detailed information on factors affecting the perception of female attractiveness, the present study aimed to investigate whether the interaction effect of breast and body size on ratings of female attractiveness is moderated by sociodemographic variables and whether ratings of shapeliness diverge from those of attractiveness. A community sample of 958 Italian participants rated the attractiveness and the shapeliness of 15 stimuli (5 breast sizes × 3 body sizes) in which frontal, 3/4, and profile views of the head and torso of a faceless woman were jointly shown. Bigger breast sizes obtained the highest attractiveness ratings, but the breast-by-body size interaction was also significant. Evidence was found of a moderating role of sex, marital status, and age. When the effects of breast and body size and their interaction had been ruled out, sex differences were at best very slight and limited to very specific combinations of breast and body sizes. Ratings of attractiveness and shapeliness were highly correlated and did not significantly differ. Results suggest that to address women's psychological needs, concerns, and expectations about their appearance, plastic surgeons should not simply focus on breast size but should carefully consider the 'big picture': the body in its entirety.

  18. A Kepler Mission, A Search for Habitable Planets: Concept, Capabilities and Strengths

    NASA Technical Reports Server (NTRS)

    Koch, David; Borucki, William; Lissauer, Jack; Dunham, Edward; Jenkins, Jon; DeVincenzi, D. (Technical Monitor)

    1998-01-01

    The detection of extrasolar terrestrial planets orbiting main-sequence stars is of great interest and importance. Current ground-based methods are only capable of detecting objects about the size or mass of Jupiter or larger. The technological challenges of direct imaging of Earth-size planets from space are expected to be resolved over the next twenty years. Space-based photometry of planetary transits is currently the only viable method for detection of terrestrial planets (30–600 times less massive than Jupiter). The method searches the extended solar neighborhood, providing a statistically large sample and the detailed characteristics of each individual case. A robust concept has been developed and proposed as a Discovery-class mission. The concept, its capabilities, and its strengths are presented.

  19. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature

    PubMed Central

    Szucs, Denes; Ioannidis, John P. A.

    2017-01-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience. PMID:28253258
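
    The reported power levels for small, medium, and large effects can be approximated from first principles; the sketch below assumes a two-sample t-test with 25 subjects per group (an arbitrary but typical figure) and uses the noncentral t distribution.

        # Power of a two-sided, two-sample t-test via the noncentral t distribution.
        import numpy as np
        from scipy.stats import nct, t as t_dist

        def ttest_power(d, n_per_group, alpha=0.05):
            df = 2 * n_per_group - 2
            nc = d * np.sqrt(n_per_group / 2)       # noncentrality parameter
            t_crit = t_dist.ppf(1 - alpha / 2, df)
            return 1 - nct.cdf(t_crit, df, nc) + nct.cdf(-t_crit, df, nc)

        for d in (0.2, 0.5, 0.8):                    # small, medium, large effects
            print(d, round(ttest_power(d, n_per_group=25), 2))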

  20. Retention of Ancestral Genetic Variation Across Life-Stages of an Endangered, Long-Lived Iteroparous Fish.

    PubMed

    Carson, Evan W; Turner, Thomas F; Saltzgiver, Melody J; Adams, Deborah; Kesner, Brian R; Marsh, Paul C; Pilger, Tyler J; Dowling, Thomas E

    2016-11-01

    As with many endangered, long-lived iteroparous fishes, survival of the razorback sucker depends on a management strategy that circumvents the recruitment failure resulting from predation by non-native fishes. In Lake Mohave, AZ-NV, management of razorback sucker centers on capture of larvae spawned in the lake, rearing them in off-channel habitats, and subsequent release ("repatriation") to the lake when adults are sufficiently large to resist predation. The effects of this strategy on genetic diversity, however, remained uncertain. After correction for differences in sample size among groups, metrics of mitochondrial DNA (mtDNA) diversity (number of haplotypes, NH, and haplotype diversity, HD) and microsatellite diversity (number of alleles, NA, and expected heterozygosity, HE) did not differ significantly between annual samples of repatriated adults and larval year-classes or among pooled samples of repatriated adults, larvae, and wild fish. These findings indicate that the current management program has thus far maintained the historical genetic variation of razorback sucker in the lake. Because the effective population size, Ne, is closely tied to the small census population size (Nc ≈ 1500–3000) of razorback sucker in Lake Mohave, this population will remain at genetic as well as demographic risk of extinction unless Nc is increased substantially. © The American Genetic Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. The Long-Term Oxygen Treatment Trial for Chronic Obstructive Pulmonary Disease: Rationale, Design, and Lessons Learned.

    PubMed

    Yusen, Roger D; Criner, Gerard J; Sternberg, Alice L; Au, David H; Fuhlbrigge, Anne L; Albert, Richard K; Casaburi, Richard; Stoller, James K; Harrington, Kathleen F; Cooper, J Allen D; Diaz, Philip; Gay, Steven; Kanner, Richard; MacIntyre, Neil; Martinez, Fernando J; Piantadosi, Steven; Sciurba, Frank; Shade, David; Stibolt, Thomas; Tonascia, James; Wise, Robert; Bailey, William C

    2018-01-01

    The Long-Term Oxygen Treatment Trial demonstrated that long-term supplemental oxygen did not reduce time to hospital admission or death for patients who have stable chronic obstructive pulmonary disease and resting and/or exercise-induced moderate oxyhemoglobin desaturation, nor did it provide benefit for any other outcome measured in the trial. Nine months after initiation of patient screening, after randomization of 34 patients to treatment, a trial design amendment broadened the eligible population, expanded the primary outcome, and reduced the goal sample size. Within a few years, the protocol underwent minor modifications, and a second trial design amendment lowered the required sample size because of lower than expected treatment group crossover rates. After 5.5 years of recruitment, the trial met its amended sample size goal, and 1 year later, it achieved its follow-up goal. The process of publishing the trial results brought renewed scrutiny of the study design and the amendments. This article expands on the previously published design and methods information, provides the rationale for the amendments, and gives insight into the investigators' decisions about trial conduct. The story of the Long-Term Oxygen Treatment Trial may assist investigators in future trials, especially those that seek to assess the efficacy and safety of long-term oxygen therapy. Clinical trial registered with clinicaltrials.gov (NCT00692198).

  2. Is a data set distributed as a power law? A test, with application to gamma-ray burst brightnesses

    NASA Technical Reports Server (NTRS)

    Wijers, Ralph A. M. J.; Lubin, Lori M.

    1994-01-01

    We present a method to determine whether an observed sample of data is drawn from a parent distribution that is a pure power law. The method starts from a class of statistics which have zero expectation value under the null hypothesis, H0, that the distribution is a pure power law: F(x) ∝ x^(-α). We study one simple member of the class, named the 'bending statistic' B, in detail. It is most effective for detecting a type of deviation from a power law in which the power-law slope varies slowly and monotonically as a function of x. Our estimator of B has a distribution under H0 that depends only on the size of the sample, not on the parameters of the parent population, and is approximated well by a normal distribution even for modest sample sizes. The bending statistic can therefore be used to test whether a set of numbers is drawn from any power-law parent population. Since many measurable quantities in astrophysics have distributions that are approximately power laws, and since deviations from the ideal power law often provide interesting information about the object of study (e.g., a 'bend' or 'break' in a luminosity function, a line in an X- or gamma-ray spectrum), we believe that a test of this type will be useful in many different contexts. In the present paper, we apply our test to various subsamples of gamma-ray burst brightnesses from the first-year Burst and Transient Source Experiment (BATSE) catalog and show that we can only marginally detect the expected steepening of the log N(>Cmax)-log Cmax distribution.
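
    The bending statistic itself is not reproduced here, but a related building block is easy to sketch: the maximum-likelihood estimate of a power-law exponent for continuous data above a threshold xmin, checked against data simulated from a pure power law (all parameter values below are illustrative).

        # MLE of a continuous power-law exponent, verified on simulated data.
        import numpy as np

        rng = np.random.default_rng(3)

        def alpha_mle(x, xmin):
            x = x[x >= xmin]
            return 1 + len(x) / np.sum(np.log(x / xmin))

        # Inverse-transform sampling from F(x) = 1 - (x/xmin)**(-(alpha-1)).
        alpha_true, xmin, n = 2.5, 1.0, 2000
        sample = xmin * (1 - rng.random(n)) ** (-1 / (alpha_true - 1))
        print(alpha_mle(sample, xmin))   # should land close to 2.5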

  3. Open sepulchers and closed boundaries? Biodistance analysis of cemetery structure and postmarital residence in the late prehispanic Andes.

    PubMed

    Velasco, Matthew C

    2018-05-16

    In the Late Intermediate Period Andes (AD 1100-1450), the proliferation of above-ground sepulchers reconfigured social boundaries within and between communities engaged in protracted conflict. However, the biosocial dimensions of these mortuary practices, and their implications for conflict and alliance formation, remain unexplored. This study examines patterns of phenotypic variation to: (1) evaluate if open sepulchers were organized on the basis of biological relatedness, and (2) explore if sex-specific phenotypic variability conforms to models of postmarital residence. Cranial nonmetric traits were recorded in five skeletal samples from two cemeteries in the Colca Valley, Peru. Biological distances between burial groups were calculated using the Mean Measure of Divergence (MMD) statistic. Postmarital residence was explored by calculating and bootstrapping the ratio of male-to-female mean pairwise differences (MPD) at the within-group level. The MMD analysis yields greater than expected between-group distances for burial groups with a minimum sample size of 20 individuals. In contrast, a prevailing pattern of sex-specific, within-group phenotypic variability is not apparent from the analysis of MPD. The use of 12 or 24 dichotomous traits produces similar results. Greater than expected biological distances suggest that above-ground mortuary practices reinforced biosocial boundaries between corporate household groups. Intracemetery heterogeneity persisted even as cranial vault modification, a correlate of social identity, became more homogenous, revealing how corporate group organization was negotiated at multiple scales. Sex-specific variation does not conform to traditional migration models. If migration occurred, it did not have a homogenizing effect on phenotypic variation. These results should be viewed with caution in light of the smaller sample sizes of sex-specific groupings. © 2018 Wiley Periodicals, Inc.
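
    The bootstrap step for the male-to-female MPD ratio can be sketched as follows, on synthetic binary trait matrices (group sizes, trait counts, and the resampling scheme are assumptions, not the study's data or exact procedure).

        # Bootstrap a male-to-female ratio of mean pairwise differences (MPD).
        import numpy as np

        rng = np.random.default_rng(4)

        def mpd(traits):
            """Mean pairwise count of differing binary traits within a group."""
            n = traits.shape[0]
            diffs = [np.sum(traits[i] != traits[j])
                     for i in range(n) for j in range(i + 1, n)]
            return np.mean(diffs)

        males = rng.integers(0, 2, size=(18, 12))    # 18 males, 12 dichotomous traits
        females = rng.integers(0, 2, size=(22, 12))  # 22 females

        ratios = []
        for _ in range(1000):
            m = males[rng.integers(0, len(males), len(males))]
            f = females[rng.integers(0, len(females), len(females))]
            ratios.append(mpd(m) / mpd(f))
        print(np.percentile(ratios, [2.5, 50, 97.5]))  # bootstrap interval for the ratio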

  4. Invited Review Small is beautiful: The analysis of nanogram-sized astromaterials

    NASA Astrophysics Data System (ADS)

    Zolensky, M. E.; Pieters, C.; Clark, B.; Papike, J. J.

    2000-01-01

    The capability of modern methods to characterize ultra-small samples is well established from analysis of interplanetary dust particles (IDPs), interstellar grains recovered from meteorites, and other materials requiring ultra-sensitive analytical capabilities. Powerful analytical techniques are available that require, under favorable circumstances, single particles of only a few nanograms for entire suites of fairly comprehensive characterizations. A returned sample of >1,000 particles with a total mass of just one microgram permits comprehensive quantitative geochemical measurements that are impractical to carry out in situ by flight instruments. The main goal of this paper is to describe the state of the art in microanalysis of astromaterials. Given that we can analyze fantastically small quantities of asteroids, comets, etc., we have to ask ourselves how representative microscopic samples are of bodies that measure a few to many kilometers across. With the Galileo flybys of Gaspra and Ida, it is now recognized that even very small airless bodies have indeed developed a particulate regolith. Acquiring a sample of the bulk regolith, a simple sampling strategy, provides two critical pieces of information about the body. Regolith samples are excellent bulk samples since they normally contain all the key components of the local environment, albeit in particulate form. Furthermore, since this fine fraction dominates remote measurements, regolith samples also provide information about surface alteration processes and are a key link to remote sensing of other bodies. Studies indicate that a statistically significant number of nanogram-sized particles should be able to characterize the regolith of a primitive asteroid, although the presence of larger components within even primitive meteorites (e.g., Murchison), such as chondrules, CAIs, and large crystal fragments, points out the limitations of using data obtained from nanogram-sized samples to characterize entire primitive asteroids. However, most important asteroidal geological processes have left their mark on the matrix, since this is the finest-grained portion and therefore the most sensitive to chemical and physical changes. Thus, the following information can be learned from this fine grain size fraction alone: (1) mineral paragenesis; (2) regolith processes; (3) bulk composition; (4) conditions of thermal and aqueous alteration (if any); (5) relationships to planets, comets, and meteorites (via isotopic analyses, including oxygen); (6) abundance of water and hydrated material; (7) abundance of organics; (8) history of volatile mobility; (9) presence and origin of presolar and/or interstellar material. Most of this information can even be obtained from dust samples from bodies for which nanogram-sized samples are not truly representative. Future advances in sensitivity and accuracy of laboratory analytical techniques can be expected to enhance the science value of nano- to microgram-sized samples even further. This highlights a key advantage of sample returns: the most advanced analysis techniques can always be applied in the laboratory, and well-preserved samples remain available for future investigations.

  5. Time-integrated sampling of fluvial suspended sediment: a simple methodology for small catchments

    NASA Astrophysics Data System (ADS)

    Phillips, J. M.; Russell, M. A.; Walling, D. E.

    2000-10-01

    Fine-grained (<62.5 µm) suspended sediment transport is a key component of the geochemical flux in most fluvial systems. The highly episodic nature of suspended sediment transport imposes a significant constraint on the design of sampling strategies aimed at characterizing the biogeochemical properties of such sediment. A simple sediment sampler, utilizing ambient flow to induce sedimentation by settling, is described. The sampler can be deployed unattended in small streams to collect time-integrated suspended sediment samples. In laboratory tests involving chemically dispersed sediment, the sampler collected a maximum of 71% of the input sample mass. However, under natural conditions, the existence of composite particles or flocs can be expected to increase the trapping efficiency significantly. Field trials confirmed that the particle size composition and total carbon content of the sediment collected by the sampler were statistically representative of the ambient suspended sediment.

  6. Job strain and shift work influences on biomarkers and subclinical heart disease indicators: a pilot study.

    PubMed

    Wong, Imelda S; Ostry, Aleck S; Demers, Paul A; Davies, Hugh W

    2012-01-01

    This pilot study is one of the first to examine the impact of job strain and shift work on both the autonomic nervous system (ANS) and the hypothalamic-pituitary-adrenal (HPA) axis using two salivary stress biomarkers and two subclinical heart disease indicators. This study also tested the feasibility of a rigorous biological sampling protocol in a busy workplace setting. Paramedics (n = 21) self-collected five salivary samples over 1 rest and 2 workdays. Samples were analyzed for α-amylase and cortisol diurnal slopes and daily production. Heart rate variability (HRV) was logged over 2 workdays with the Polar RS800 Heart Rate monitors. Endothelial functioning was measured using fingertip peripheral arterial tonometry. Job strain was ascertained using a paramedic-specific survey. The effects of job strain and shift work were examined by comparing paramedic types (dispatchers vs. ambulance attendants) and shift types (daytime vs. rotating day/night). Over 90% of all expected samples were collected and fell within expected normal ranges. Workday samples were significantly different from rest day samples. Dispatchers reported higher job strain than ambulance paramedics and exhibited reduced daily alpha-amylase production, elevated daily cortisol production, and reduced endothelial function. In comparison with daytime-only workers, rotating shift workers reported higher job strain, exhibited flatter α-amylase and cortisol diurnal slopes, reduced daily α-amylase production, elevated daily cortisol production, and reduced HRV and endothelial functioning. Despite non-statistically significant differences between group comparisons, the consistency of the overall trend in subjective and objective measures suggests that exposure to work stressors may lead to dysregulation in neuroendocrine activity and, over the long-term, to early signs of heart disease. Results suggest that further study is warranted in this population. Power calculations based on effect sizes in the shift type comparison suggest a study size of n = 250 may result in significant differences at p = 0.05. High compliance among paramedics to complete the intensive protocol suggests this study will be feasible in a larger population.

  7. Identifying across‐system sources of variation in a generalist freshwater fish: Correlates of total and size‐specific abundance of yellow perch

    USGS Publications Warehouse

    Carey, Michael P.; Mather, M. E.

    2009-01-01

    Variation in fish abundance across systems presents a challenge to our understanding of fish populations because it limits our ability to predict and transfer basic ecological principles to applied problems. Yellow perch (Perca flavescens) is an ideal species for exploring environmental and biotic correlates across systems because it is widely distributed and physiologically tolerant. In 16 small, adjacent systems that span a wide range of environmental and biotic conditions, yellow perch were sampled with a standard suite of gear. Water quality, morphometry, vegetation, invertebrates, and fish communities were concurrently measured. Multimodel inference was used to prioritise regressors for the entire yellow perch sample and three size groups (35-80, 81-180, ≥181 mm TL). Across systems, pH and fish richness were identified as the key drivers of yellow perch abundance. At very low pH (<4.0), few fish species and few yellow perch individuals were found. At ponds with moderately low pH (4.0–4.8), numbers of yellow perch increased. Ponds with high pH (>4.8) had many other species and few yellow perch. Similar patterns for pH and fish community were observed for the two largest size classes. Negative interactions were observed between the medium- and large-sized yellow perch and between the largest and smallest yellow perch, although interspecific interactions were weaker than expected. This examination of variability for an indicator species and its component size classes provides ecological understanding that can help frame the larger-scale sampling programs needed for the conservation of freshwater fish.
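
    A small sketch of the multimodel-inference step with made-up predictors (pond pH, fish richness, vegetation) shows how candidate regressions can be ranked by AIC and converted to Akaike weights; none of the numbers correspond to the study's data.

        # Rank candidate abundance models by AIC and compute Akaike weights.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        df = pd.DataFrame({
            "pH": rng.uniform(3.5, 6.0, 16),
            "richness": rng.integers(1, 10, 16),
            "vegetation": rng.random(16),
        })
        df["abundance"] = (5 - 2 * abs(df["pH"] - 4.4)
                           - 0.3 * df["richness"] + rng.normal(0, 0.5, 16))

        formulas = ("pH", "richness", "vegetation", "pH + richness")
        aic = pd.Series({f: smf.ols(f"abundance ~ {f}", data=df).fit().aic for f in formulas})
        delta = aic - aic.min()
        weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
        print(weights.sort_values(ascending=False))  # Akaike weight per candidate model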

  8. Determination of N-acetylglucosamine in cosmetic formulations and skin test samples by hydrophilic interaction liquid chromatography and UV detection.

    PubMed

    Pedrali, Alice; Bleve, Mariella; Capra, Priscilla; Jonsson, Tobias; Massolini, Gabriella; Perugini, Paola; Marrubini, Giorgio

    2015-03-25

    N-Acetylglucosamine is an ingredient in pharmaceuticals, nutritional supplements, and cosmetics. In cosmetics, N-acetylglucosamine is expected to improve skin hydration and repair and to act as an anti-wrinkle agent. This study reports on the validation and application of an HPLC method based on HILIC and UV detection for determining N-acetylglucosamine in cosmetics and in samples obtained after testing skin exposed to cosmetic formulations. The chromatographic column used is a ZIC(®)-pHILIC (150 mm × 4.6 mm, 5 μm particle size), on which a mobile phase of acetonitrile-15 mM aqueous KH2PO4 (70:30, v/v) was applied in isocratic elution mode, injecting 20 μl of sample at a constant flow rate of 0.5 ml/min and a column temperature of 10±1°C. Under these conditions the total run time was 10 min and N-acetylglucosamine eluted baseline-separated from all other compounds in the samples. Calibration in the range from 40 to 80 μg/ml demonstrated the method's linearity (R² > 0.999) in a concentration range corresponding to about 50% to 120% of the expected levels of N-acetylglucosamine in the formulations. Precision expressed as RSD% was always better than 2% in intra-day and inter-day assays of authentic samples. Accuracy was in all cases within 95-105% of the expected concentration value in formulations containing N-acetylglucosamine. The sensitivity of the method was 10 μg/ml as the limit of detection and 40 μg/ml as the limit of quantitation. The application of the method to formulations containing solid lipid nanoparticles documents its usefulness in cosmetic quality control. The results show that the method is also suitable for the determination of N-acetylglucosamine in samples obtained from skin test strips. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Macrogeographic and microgeographic genetic structure of the Chagas' disease vector Triatoma infestans (Hemiptera: Reduviidae) from Catamarca, Argentina.

    PubMed

    Pérez de Rosas, Alicia R; Segura, Elsa L; Fichera, Laura; García, Beatriz Alicia

    2008-07-01

    The genetic structure in populations of the Chagas' disease vector Triatoma infestans from six localities belonging to areas under the same insecticide treatment conditions of Catamarca province (Argentina) was examined at macrogeographical and microgeographical scales. A total of 238 insects were typed for 10 polymorphic microsatellite loci. The average observed and expected heterozygosities ranged from 0.319 to 0.549 and from 0.389 to 0.689, respectively. The present results confirm that populations of T. infestans are highly structured. Spatial genetic structure was detectable at macrogeographical and microgeographical levels. Comparisons of the levels of genetic variability between two temporal samples were carried out to assess the impact of the insecticide treatment. The genetic diversity of the population was not significantly affected after insecticide use since different genetic parameters (allele number, observed and expected heterozygosities) remained stable. However, loss of low frequency alleles and not previously found alleles were detected. The effective population size (N(e)) estimated was substantially lower in the second temporal sample than in the first; nevertheless, it is possible that the size of the remnant population after insecticide treatment was still large enough to retain the genetic diversity. Very few individuals did not belong to the local T. infestans populations as determined by assignment analyses, suggesting a low level of immigration in the population. The results of the assignment and first-generation migrant tests suggest male-biased dispersal at microgeographical level.

  10. Indoor particle levels in small- and medium-sized commercial buildings in California.

    PubMed

    Wu, Xiangmei May; Apte, Michael G; Bennett, Deborah H

    2012-11-20

    This study monitored indoor and outdoor particle concentrations in 37 small and medium commercial buildings (SMCBs) in California with three buildings sampled on two occasions, resulting in 40 sampling days. Sampled buildings included offices, retail establishments, restaurants, dental offices, and hair salons, among others. Continuous measurements were made for both ultrafine and fine particulate matter as well as black carbon inside and outside of the building. Integrated PM(2.5), PM(2.5-10), and PM(10) samples were also collected inside and outside the building. The majority of the buildings had indoor/outdoor (I/O) particle concentration ratios less than 1.0, indicating that contributions from indoor sources are less than removal of outdoor particles. However, some of the buildings had I/O ratios greater than 1, indicating significant indoor particle sources. This was particularly true of restaurants, hair salons, and dental offices. The infiltration factor was estimated from a regression analysis of indoor and outdoor concentrations for each particle size fraction, finding lower values for ultrafine and coarse particles than for submicrometer particles, as expected. The I/O ratio of black carbon was used as a relative measure of the infiltration factor of particles among buildings, with a geometric mean of 0.62. The contribution of indoor sources to indoor particle levels was estimated for each building.
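
    The infiltration-factor regression described above amounts to fitting indoor = Finf * outdoor + indoor-source term for each size fraction; the sketch below uses synthetic concentrations with an assumed slope near the reported geometric mean of 0.62.

        # Estimate the infiltration factor as the slope of indoor vs. outdoor levels.
        import numpy as np
        from scipy.stats import linregress

        rng = np.random.default_rng(6)
        outdoor = rng.uniform(5, 40, size=40)                   # e.g., PM2.5 in ug/m3
        indoor = 0.62 * outdoor + 3.0 + rng.normal(0, 2, 40)    # slope ~ Finf, intercept ~ indoor sources

        fit = linregress(outdoor, indoor)
        print(f"infiltration factor ~ {fit.slope:.2f}, indoor-source term ~ {fit.intercept:.1f}")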

  11. On the comparison of the strength of morphological integration across morphometric datasets.

    PubMed

    Adams, Dean C; Collyer, Michael L

    2016-11-01

    Evolutionary morphologists frequently wish to understand the extent to which organisms are integrated, and whether the strength of morphological integration among subsets of phenotypic variables differs among taxa or other groups. However, comparisons of the strength of integration across datasets are difficult, in part because the summary measures that characterize these patterns (the RV coefficient and rPLS) depend both on sample size and on the number of variables. As a solution to this issue, we propose a standardized test statistic (a z-score) for measuring the degree of morphological integration between sets of variables. The approach is based on a partial least squares analysis of trait covariation and its permutation-based sampling distribution. Under the null hypothesis of a random association of variables, the method displays a constant expected value and confidence intervals for datasets of differing sample sizes and variable numbers, thereby providing a consistent measure of integration suitable for comparisons across datasets. A two-sample test is also proposed to statistically determine whether levels of integration differ between datasets, and an empirical example examining cranial shape integration in Mediterranean wall lizards illustrates its use. Some extensions of the procedure are also discussed. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
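
    A minimal sketch of the permutation-based z-score idea, on synthetic trait blocks (block sizes and the use of the first singular value of the cross-covariance matrix as the integration statistic are assumptions made for illustration):

        # Permutation z-score for the strength of association between two trait blocks.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 40
        common = rng.standard_normal((n, 1))             # shared factor inducing integration
        X = common + 0.5 * rng.standard_normal((n, 5))   # trait block 1
        Y = common + 0.5 * rng.standard_normal((n, 8))   # trait block 2

        def pls_stat(a, b):
            a = a - a.mean(0)
            b = b - b.mean(0)
            return np.linalg.svd(a.T @ b / (len(a) - 1), compute_uv=False)[0]

        obs = pls_stat(X, Y)
        perm = np.array([pls_stat(X, Y[rng.permutation(n)]) for _ in range(999)])
        z = (obs - perm.mean()) / perm.std(ddof=1)
        print(round(z, 2))   # standardized effect, comparable across datasets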

  12. Size distribution of rare earth elements in coal ash

    USGS Publications Warehouse

    Scott, Clinton T.; Deonarine, Amrika; Kolker, Allan; Adams, Monique; Holland, James F.

    2015-01-01

    Rare earth elements (REEs) are utilized in various applications that are vital to the automotive, petrochemical, medical, and information technology industries. As world demand for REEs increases, critical shortages are expected. Because REEs are retained during coal combustion, coal fly ash is increasingly considered a potential resource. Previous studies have demonstrated that coal fly ash is variably enriched in REEs relative to feed coal (e.g., Seredin and Dai, 2012) and that enrichment increases with decreasing size fraction (Blissett et al., 2014). In order to further explore the REE resource potential of coal ash, and to determine the partitioning behavior of REEs as a function of grain size, we studied whole coal and fly ash size fractions collected from three U.S. commercial-scale coal-fired generating stations burning Appalachian or Powder River Basin coal. Whole fly ash was separated into <5 µm, 5 to 10 µm, and 10 to 100 µm particle size fractions by mechanical shaking using trace-metal-clean procedures. In these samples, REE enrichment in whole fly ash ranges from 5.6 to 18.5 times that of the feed coals. Partitioning results for size separates relative to whole coal and whole fly ash will also be reported.

  13. Population demographics and genetic diversity in remnant and translocated populations of sea otters

    USGS Publications Warehouse

    Bodkin, James L.; Ballachey, Brenda E.; Cronin, M.A.; Scribner, K.T.

    1999-01-01

    The effects of small population size on genetic diversity and subsequent population recovery are theoretically predicted, but few empirical data are available to describe those relations. We use data from four remnant and three translocated sea otter (Enhydra lutris) populations to examine relations among the magnitude and duration of minimum population size, population growth rates, and genetic variation. Mitochondrial (mt)DNA haplotype diversity was correlated with the number of years at minimum population size (r = -0.741, p = 0.038) and with minimum population size (r = 0.709, p = 0.054). We found no relation between population growth and haplotype diversity, although growth was significantly greater in translocated than in remnant populations. Haplotype diversity in populations established from two sources was higher than in a population established from a single source and was higher than in the respective source populations. Haplotype frequencies in translocated populations with founding sizes of 4 and 28 differed from expected, indicating genetic drift and differential reproduction between source populations, whereas haplotype frequencies in a translocated population with a founding size of 150 did not. Relations between population demographics and genetic characteristics suggest that genetic sampling of source and translocated populations can provide valuable inferences about translocations.

  14. Scanning transmission electron microscopy analysis of Ge(O)/(graphitic carbon nitride) nanocomposite powder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawasaki, Masahiro; Sompetch, Kanganit; Sarakonsri, Thapanee, E-mail: tsarakonsri@gmail.com

    2015-12-15

    Analytical electron microscopy has revealed the structure of particles that were synthesized by chemical reaction of GeO{sub 2} with NaBH{sub 4} in a basic solution including graphitic carbon nitride (g-C{sub 3}N{sub 4}) powders. The g-C{sub 3}N{sub 4} was prepared by recrystallization of melamine at 600 °C under an N{sub 2} gas atmosphere. The samples were dried at 60 °C or 180 °C for 4 h. The g-C{sub 3}N{sub 4} was observed as lamellae of several tens of nm or less in size and had an amorphous-like structure with a distorted lattice in areas as small as a few hundred pm in size. The reaction product was Ge(O) particles as fine as several nm in size and composed of Ge and O atoms. Most of the particles must be GeO{sub 2−x} with the amorphous-like structure that also has a distorted lattice in areas of a few hundred pm in size. In the sample dried at 60 °C, the particles were found to be dispersed over a wide area on the g-C{sub 3}N{sub 4} lamellae and are hard to recognize in TEM images. The particles in the sample dried at 180 °C became larger and were easily observed as isolated lumps. Hence, these powders can be regarded as GeO{sub 2}/g-C{sub 3}N{sub 4} or Ge/GeO{sub 2}/g-C{sub 3}N{sub 4} nanocomposites and are expected to be applicable as anode materials for high-energy Li-ion batteries owing to the catalytic effect of Ge. - Graphical abstract: STEM analysis of Ge(O)/(graphitic carbon nitride) nanocomposite powder. - Highlights: • Graphitic (g)-C{sub 3}N{sub 4} powder was prepared at 600 °C by recrystallization of melamine. • Ge(O) was prepared by chemical reaction in a solution including the g-C{sub 3}N{sub 4} powders. • The products can be regarded as GeO{sub 2}/g-C{sub 3}N{sub 4} or Ge/GeO{sub 2}/g-C{sub 3}N{sub 4} nanocomposites. • GeO{sub 2} formed amorphous particles several nm in size and g-C{sub 3}N{sub 4} formed amorphous lamellae several tens of nm in size. • We expect them to be applicable as high-energy Li-ion battery anode materials.

  15. Population Size and the Rate of Language Evolution: A Test Across Indo-European, Austronesian, and Bantu Languages

    PubMed Central

    Greenhill, Simon J.; Hua, Xia; Welsh, Caela F.; Schneemann, Hilde; Bromham, Lindell

    2018-01-01

    What role does speaker population size play in shaping rates of language evolution? There has been little consensus on the expected relationship between rates and patterns of language change and speaker population size, with some predicting faster rates of change in smaller populations, and others expecting greater change in larger populations. The growth of comparative databases has allowed population size effects to be investigated across a wide range of language groups, with mixed results. One recent study of a group of Polynesian languages revealed greater rates of word gain in larger populations and greater rates of word loss in smaller populations. However, that test was restricted to 20 closely related languages from small Oceanic islands. Here, we test if this pattern is a general feature of language evolution across a larger and more diverse sample of languages from both continental and island populations. We analyzed comparative language data for 153 pairs of closely-related sister languages from three of the world's largest language families: Austronesian, Indo-European, and Niger-Congo. We find some evidence that rates of word loss are significantly greater in smaller languages for the Indo-European comparisons, but we find no significant patterns in the other two language families. These results suggest either that the influence of population size on rates and patterns of language evolution is not universal, or that it is sufficiently weak that it may be overwhelmed by other influences in some cases. Further investigation, for a greater number of language comparisons and a wider range of language features, may determine which of these explanations holds true. PMID:29755387

  16. Why Barbie Feels Heavier than Ken: The Influence of Size-Based Expectancies and Social Cues on the Illusory Perception of Weight

    ERIC Educational Resources Information Center

    Dijker, Anton J. M.

    2008-01-01

    In order to examine the relative influence of size-based expectancies and social cues on the perceived weight of objects, two studies were performed, using equally weighing dolls differing in sex-related and age-related vulnerability or physical strength cues. To increase variation in perceived size, stimulus objects were viewed through optical…

  17. Family Size Preferences in Europe and USA: Ultimate Expected Number of Children. Comparative Studies Number 26: ECE Analyses of Surveys in Europe and USA.

    ERIC Educational Resources Information Center

    Berent, Jerzy

    This survey analysis compares fertility levels in the United States and European countries, discusses socioeconomic influences in ultimate expected family size, and examines birth rate trends. The average number of ultimately expected children varies from 2.13 children per woman in Bulgaria to 2.80 in Spain. Eighty to 90 percent of U.S. and…

  18. Determining quantity and quality of retained oil in mature marly chalk and marlstone of the Cretaceous Niobrara Formation by low-temperature hydrous pyrolysis

    USGS Publications Warehouse

    Lewan, Michael; Sonnenfeld, Mark D.

    2017-01-01

    Low-temperature hydrous pyrolysis (LTHP) at 300°C (572°F) for 24 h released retained oils from 12- to 20-mesh-size samples of mature Niobrara marly chalk and marlstone cores. The released oil accumulated on the water surface of the reactor and is compositionally similar to oil produced from the same well. The quantities of oil released from the marly chalk and marlstone by LTHP are respectively 3.4 and 1.6 times greater than those determined by tight rock analyses (TRA) on aliquots of the same samples. Gas chromatograms indicated this difference is a result of TRA oils losing more volatiles and volatilizing less of the heavy hydrocarbons during collection than LTHP oils. Characterization of the rocks before and after LTHP by programmable open-system pyrolysis (HAWK) indicates that under LTHP conditions no significant oil is generated and only preexisting retained oil is released. Although LTHP appears to provide better predictions of the quantity and quality of retained oil in a mature source rock, it is not expected to replace TRA, which is more time- and sample-size-efficient. However, LTHP can be applied to composited samples from key intervals or lithologies originally recognized by TRA. Additional studies on the duration, temperature, and sample size used in LTHP may further optimize its utility.

  19. Testing the non-unity of rate ratio under inverse sampling.

    PubMed

    Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing

    2007-08-01

    Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio and conditional score statistics. Three methods (the asymptotic, conditional exact, and mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If greater power is desired and fluctuations of size around the pre-chosen nominal level are allowed, then the mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
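
    The Monte Carlo evaluation of empirical size and power described above follows a generic recipe that can be sketched as follows. This is only a schematic illustration: simulate_null and p_value are hypothetical placeholders standing in for the inverse-sampling data generator and any one of the test statistics studied (Wald, score, likelihood ratio), which are not reproduced here.

        import numpy as np

        def empirical_size(simulate_null, p_value, alpha=0.05, n_rep=10_000, seed=7):
            """Estimate the actual type I error rate of a test by simulation.

            simulate_null: callable taking an rng and returning one dataset under H0.
            p_value:       callable mapping a dataset to the test's P-value.
            """
            rng = np.random.default_rng(seed)
            rejections = sum(p_value(simulate_null(rng)) < alpha for _ in range(n_rep))
            return rejections / n_rep

        # Empirical power is obtained the same way, with data generated under the alternative.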

  20. Magnetic microscopic imaging with an optically pumped magnetometer and flux guides

    DOE PAGES

    Kim, Young Jin; Savukov, Igor Mykhaylovich; Huang, Jen -Huang; ...

    2017-01-23

    Here, by combining an optically pumped magnetometer (OPM) with flux guides (FGs) and by installing a sample platform on automated translation stages, we have implemented an ultra-sensitive FG-OPM scanning magnetic imaging system that is capable of detecting magnetic fields of ~20 pT with spatial resolution better than 300 μm (expected to reach ~10 pT sensitivity and ~100 μm spatial resolution with optimized FGs). As a demonstration of one possible application of the FG-OPM device, we conducted magnetic imaging of micron-size magnetic particles. Magnetic imaging of such particles, including nano-particles and clusters, is very important for many fields, especially for medical cancer diagnostics and biophysics applications. For rapid, precise magnetic imaging, we constructed an automatic scanning system, which holds and moves a target sample containing magnetic particles at a given stand-off distance from the FG tips. We show that the device was able to produce clear microscopic magnetic images of 10 μm-size magnetic particles. In addition, we also numerically investigated how the magnetic flux from a target sample at a given stand-off distance is transmitted to the OPM vapor cell.

  1. Alcohol marketing research: the need for a new agenda.

    PubMed

    Meier, Petra S

    2011-03-01

    This paper aims to contribute to a rethink of marketing research priorities to address policy makers' evidence needs in relation to alcohol marketing. Discussion paper reviewing evidence gaps identified during an appraisal of policy options to restrict alcohol marketing. Evidence requirements can be categorized as follows: (i) the size of marketing effects for the whole population and for policy-relevant population subgroups, (ii) the balance between immediate and long-term effects and the time lag, duration and cumulative build-up of effects and (iii) comparative effects of partial versus comprehensive marketing restrictions on consumption and harm. These knowledge gaps impede the appraisal and evaluation of existing and new interventions, because without understanding the size and timing of expected effects, researchers may choose inadequate time-frames, samples or sample sizes. To date, research has tended to rely on simplified models of marketing and has focused disproportionately on youth populations. The effects of cumulative exposure across multiple marketing channels, targeting of messages at certain population groups and indirect effects of advertising on consumption remain unclear. It is essential that studies into marketing effect sizes are geared towards informing policy decision-makers, anchored strongly in theory, use measures of effect that are well-justified and recognize fully the complexities of alcohol marketing efforts. © 2010 The Author, Addiction © 2010 Society for the Study of Addiction.

  2. Comparing the Pearson and Spearman correlation coefficients across distributions and sample sizes: A tutorial using simulations and empirical data.

    PubMed

    de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff

    2016-09-01

    The Pearson product–moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes, we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (R_p) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_s and r_p. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
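
    A minimal simulation in the spirit of the comparison above can be written as follows. The parameters (correlation, sample size, number of replicates) are illustrative assumptions, not the authors' settings.

        import numpy as np
        from scipy import stats

        def compare_variability(rho=0.5, n=50, n_rep=5_000, seed=0):
            """Standard deviations of Pearson and Spearman r over repeated samples."""
            rng = np.random.default_rng(seed)
            cov = [[1.0, rho], [rho, 1.0]]
            rp, rs = [], []
            for _ in range(n_rep):
                x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
                rp.append(stats.pearsonr(x, y)[0])
                rs.append(stats.spearmanr(x, y)[0])
            return np.std(rp, ddof=1), np.std(rs, ddof=1)

        # For bivariate normal data the Pearson coefficient is expected to be
        # the less variable of the two; heavy-tailed data reverse the ordering.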

  3. Knowledge and attitude towards total knee arthroplasty among the public in Saudi Arabia: a nationwide population-based study.

    PubMed

    Al-Mohrej, Omar A; Alshammari, Faris O; Aljuraisi, Abdulrahman M; Bin Amer, Lujain A; Masuadi, Emad M; Al-Kenani, Nader S

    2018-04-01

    Studies on total knee arthroplasty (TKA) in Saudi Arabia are scarce, and none have reported the knowledge and attitude of the procedure in Saudi Arabia. Our study aims to measure the knowledge and attitude of TKA among the adult Saudi population. To encompass a representative sample of this cross-sectional survey, all 13 administrative areas were used as ready-made geographical clusters. For each cluster, stratified random sampling was performed to maximize participation in the study. In each area, random samples of mobile phone numbers were selected with a probability proportional to the administrative area population size. Sample size calculation was based on the assumption that 50% of the participants would have some level of knowledge, with a 2% margin of error and 95% confidence level. To reach our intended sample size of 1540, we contacted 1722 participants with a response rate of 89.4%. The expected percentage of public knowledge was 50%; however, the actual percentage revealed by this study was much lower (29.7%). A stepwise multiple logistic regression was used to assess the factors that positively affected the knowledge score regarding TKA. Age [P = 0.016 with OR of 0.47], higher income [P = 0.001 with OR of 0.52] and participants with a positive history of TKA or that have known someone who underwent the surgery [P < 0.001 with OR of 0.15] had a positive impact on the total knowledge score. There are still misconceptions among the public in Saudi Arabia concerning TKA, its indications and results. We recommend that doctors use the results of our survey to assess their conversations with their patients, and to determine whether the results of the procedure are adequately clarified.
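
    The sample-size calculation reported here follows the standard formula for estimating a proportion within a specified margin of error, n = z^2 p(1-p)/d^2. The sketch below uses illustrative inputs only; the study's exact margin and any design adjustments are not reproduced.

        import math
        from statistics import NormalDist

        def n_for_proportion(p=0.5, margin=0.03, confidence=0.95):
            """Sample size to estimate a proportion p to within +/- margin."""
            z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided normal quantile
            return math.ceil(z**2 * p * (1 - p) / margin**2)

        # Example: p = 0.5 with a 3% margin and 95% confidence gives about 1,068;
        # the target is then typically inflated to allow for non-response.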

  4. Generation and Characterization of Nanoaerosols Using a Portable Scanning Mobility Particle Sizer and Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Marty, Adam J.

    The purpose of this research is to demonstrate the ability to generate and characterize a nanometer sized aerosol using solutions, suspensions, and a bulk nanopowder, and to research the viability of using an acoustic dry aerosol generator/elutriator (ADAGE) to aerosolize a bulk nanopowder into a nanometer sized aerosol. The research compares the results from a portable scanning mobility particle sizer (SMPS) to the more traditional method of counting and sizing particles on a filter sample using scanning electron microscopy (SEM). Sodium chloride aerosol was used for the comparisons. The sputter coating thickness, a conductive coating necessary for SEM, was measured on different sizes of polystyrene latex spheres (PSLS). Aluminum oxide powder was aerosolized using an ADAGE and several different support membranes and sound frequency combinations were explored. A portable SMPS was used to determine the size distributions of the generated aerosols. Polycarbonate membrane (PCM) filter samples were collected for subsequent SEM analysis. The particle size distributions were determined from photographs of the membrane filters. SMPS data and membrane samples were collected simultaneously. The sputter coating thicknesses on four different sizes of PSLS, range 57 nanometers (nm) to 220 nm, were measured using transmission electron microscopy and the results from the SEM and SMPS were compared after accounting for the sputter coating thickness. Aluminum oxide nanopowder (20 nm) was aerosolized using a modified ADAGE technique. Four different support membranes and four different sound frequencies were tested with the ADAGE. The aerosol was collected onto PCM filters and the samples were examined using SEM. The results indicate that the SMPS and SEM distributions were log-normally distributed with a median diameter of approximately 42 nm and 55 nm, respectively, and geometric standard deviations (GSD) of approximately 1.6 and 1.7, respectively. The two methods yielded similar distributional trends with a difference in median diameters of approximately 11 -- 15 nm. The sputter coating thickness on the different sizes of PSLSs ranged from 15.4 -- 17.4 nm. The aerosols generated, using the modified ADAGE, were low in concentration. The particles remained as agglomerates and varied widely in size. An aluminum foil support membrane coupled with a high sound frequency generated the smallest agglomerates. A well characterized sodium chloride aerosol was generated and was reproducible. The distributions determined using SEM were slightly larger than those obtained from SMPS, however, the distributions had relatively the same shape as reflected in their GSDs. This suggests that a portable SMPS is a suitable method for characterizing a nanoaerosol. The sizing techniques could be compared after correcting for the effects of the sputter coating necessary for SEM examination. It was determined that the sputter coating thickness on nano-sized particles and particles up to approximately 220 nm can be expected to be the same and that the sputter coating can add considerably to the size of a nanoparticle. This has important implications for worker health where nanoaerosol exposure is a concern. The sputter coating must be considered when SEM is used to describe a nanoaerosol exposure. The performance of the modified ADAGE was less than expected. The low aerosol output from the ADAGE prevented a more detailed analysis and was limited to only a qualitative comparison. 
Some combinations of support membranes and sound frequencies performed better than others, particularly conductive support membranes and high sound frequencies. In conclusion, a portable SMPS yielded results similar to those obtained by SEM. The sputter coating was the same thickness on the PSLSs studied. The sputter coating thickness must be considered when characterizing nanoparticles using SEM. Finally, a conductive support membrane and higher frequencies appeared to generate the smallest agglomerates using the ADAGE technique.

  5. On the choice of statistical models for estimating occurrence and extinction from animal surveys

    USGS Publications Warehouse

    Dorazio, R.M.

    2007-01-01

    In surveys of natural animal populations the number of animals that are present and available to be detected at a sample location is often low, resulting in few or no detections. Low detection frequencies are especially common in surveys of imperiled species; however, the choice of sampling method and protocol also may influence the size of the population that is vulnerable to detection. In these circumstances, probabilities of animal occurrence and extinction will generally be estimated more accurately if the models used in data analysis account for differences in abundance among sample locations and for the dependence between site-specific abundance and detection. Simulation experiments are used to illustrate conditions wherein these types of models can be expected to outperform alternative estimators of population site occupancy and extinction. ?? 2007 by the Ecological Society of America.
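
    One way to see the dependence between site-specific abundance and detection described above is the Royle-Nichols-style relationship in which a site holding N individuals, each detected independently with probability r, is detected with probability 1 - (1 - r)^N. The simulation below is a hedged illustration with assumed parameter values, not the simulation design used in the paper.

        import numpy as np

        def simulate_detections(n_sites=200, n_visits=5, mean_abundance=0.8,
                                per_individual_p=0.3, seed=3):
            """Simulate detection histories where detection depends on local abundance."""
            rng = np.random.default_rng(seed)
            abundance = rng.poisson(mean_abundance, size=n_sites)     # N_i varies among sites
            p_detect = 1.0 - (1.0 - per_individual_p) ** abundance    # site-level detection prob.
            detections = rng.binomial(n_visits, p_detect)             # detections over repeat visits
            return abundance, detections

        # Occupied sites with N_i = 1 are detected far less often than sites with N_i >= 3,
        # which is why models ignoring abundance heterogeneity can misestimate occupancy.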

  6. Reproduction in female copperhead snakes (Agkistrodon contortrix): plasma steroid profiles during gestation and post-birth periods.

    PubMed

    Smith, Charles F; Schuett, Gordon W; Hoss, Shannon K

    2012-04-01

    We investigated levels of plasma progesterone (P4), 17β-estradiol (E2), testosterone (T), and corticosterone (CORT) during gestation and post-birth periods in wild-collected female copperhead snakes (Viperidae; Agkistrodon contortrix). We also sought to determine whether CORT levels at (or near) birth dramatically increase and were correlated with duration of labor and litter size. Specifically, pregnant subjects (N = 14) were collected during early- to mid-gestation, held in the laboratory, and repeatedly bled to obtain plasma for steroid analyses. Progesterone showed significant changes during gestation, with the highest levels at the onset of sampling (circa 50 days prior to birth); P4 progressively declined up to parturition, and basal levels were observed thereafter. At the onset of sampling, E2 was at peak levels and fell sharply at circa 30 days prior to birth, a trend observed throughout the post-birth sampling period. Throughout the entire sampling period, T was undetectable. Although CORT showed no significant changes during gestation and several days following parturition, there was a highly significant peak at the time of birth. Our findings mirror the results of previous studies on pregnancy and steroid hormones of other live-bearing snakes, lizards, and mammals. As expected, there was a significant relationship between duration of labor and litter size; however, although levels of CORT did not achieve significance, there was a positive trend with litter size. We suggest that elevation of CORT at birth is involved in the mobilization and regulation of energy stores necessary for the physiological process of parturition and as a possible mechanism to trigger birth.

  7. South Pole-Aitken Sample Return Mission: Collecting Mare Basalts from the Far Side of the Moon

    NASA Technical Reports Server (NTRS)

    Gillis, J. J.; Jolliff, B. L.; Lucey, P. G.

    2003-01-01

    We consider the probability that a sample mission to a site within the South Pole-Aitken Basin (SPA) would return basaltic material. A sample mission to the SPA would be the first opportunity to sample basalts from the far side of the Moon. The near side basalts are more abundant in terms of volume and area than their far-side counterparts (16:1), and the basalt deposits within SPA represent approx. 28% of the total basalt surface area on the far side. Sampling far-side basalts is of particular importance because, as partial melts of the mantle, they could have derived from a mantle that is mineralogically and chemically different from that determined for the nearside, as would be expected if the magma ocean solidified earlier on the far side. For example, evidence to support the existence of high-Th basalts like those that appear to be common on the nearside in the Procellarum KREEP Terrane has been found. Although SPA is the deepest basin on the Moon, it is not extensively filled with mare basalt, as might be expected if similar amounts of partial melting occurred in the mantle below SPA as for basins on the near side. These observations may mean that the mantle beneath the far-side crust is lower in Th and other heat-producing elements than the nearside. One proposed location for a sample-return landing site is 60 S, 160 W. This site was suggested to maximize the science return with respect to sampling crustal material and SPA impact melt; however, basaltic samples would undoubtedly occur there. On the basis of Apollo samples, we should expect that basaltic materials would be found in the vicinity of any landing site within SPA, even if located away from mare deposits. For example, the Apollo 16 mission landed in an ancient highlands region 250-300 km away from the nearest mare-highlands boundary, yet it still contains a small component of basaltic samples (20 lithic fragments ranging in size from <1 to 0.01 cm). A soil sample from the floor of SPA will likely contain an assortment of basaltic fragments from surrounding regions. In terms of both selecting the best landing sites and understanding the geologic context for returned samples, it is important to understand the compositional distribution of basalts within the SPA basin.

  8. Forecasting eruption size: what we know, what we don't know

    NASA Astrophysics Data System (ADS)

    Papale, Paolo

    2017-04-01

    Any eruption forecast includes an evaluation of the expected size of the forthcoming eruption, usually expressed as the probability associated to given size classes. Such evaluation is mostly based on the previous volcanic history at the specific volcano, or it is referred to a broader class of volcanoes constituting "analogues" of the one under specific consideration. In any case, use of knowledge from past eruptions implies considering the completeness of the reference catalogue, and most importantly, the existence of systematic biases in the catalogue, that may affect probability estimates and translate into biased volcanic hazard forecasts. An analysis of existing catalogues, with major reference to the catalogue from the Smithsonian Global Volcanism Program, suggests that systematic biases largely dominate at global, regional and local scale: volcanic histories reconstructed at individual volcanoes, often used as a reference for volcanic hazard forecasts, are the result of systematic loss of information with time and poor sample representativeness. That situation strictly requires the use of techniques to complete existing catalogues, as well as careful consideration of the uncertainties deriving from inadequate knowledge and model-dependent data elaboration. A reconstructed global eruption size distribution, obtained by merging information from different existing catalogues, shows a mode in the VEI 1-2 range, <0.1% incidence of eruptions with VEI 7 or larger, and substantial uncertainties associated with individual VEI frequencies. Even larger uncertainties are expected to derive from application to individual volcanoes or classes of analogue volcanoes, suggesting large to very large uncertainties associated to volcanic hazard forecasts virtually at any individual volcano worldwide.

  9. Comparative analyses of basal rate of metabolism in mammals: data selection does matter.

    PubMed

    Genoud, Michel; Isler, Karin; Martin, Robert D

    2018-02-01

    Basal rate of metabolism (BMR) is a physiological parameter that should be measured under strictly defined experimental conditions. In comparative analyses among mammals BMR is widely used as an index of the intensity of the metabolic machinery or as a proxy for energy expenditure. Many databases with BMR values for mammals are available, but the criteria used to select metabolic data as BMR estimates have often varied and the potential effect of this variability has rarely been questioned. We provide a new, expanded BMR database reflecting compliance with standard criteria (resting, postabsorptive state; thermal neutrality; adult, non-reproductive status for females) and examine potential effects of differential selectivity on the results of comparative analyses. The database includes 1739 different entries for 817 species of mammals, compiled from the original sources. It provides information permitting assessment of the validity of each estimate and presents the value closest to a proper BMR for each entry. Using different selection criteria, several alternative data sets were extracted and used in comparative analyses of (i) the scaling of BMR to body mass and (ii) the relationship between brain mass and BMR. It was expected that results would be especially dependent on selection criteria with small sample sizes and with relatively weak relationships. Phylogenetically informed regression (phylogenetic generalized least squares, PGLS) was applied to the alternative data sets for several different clades (Mammalia, Eutheria, Metatheria, or individual orders). For Mammalia, a 'subsampling procedure' was also applied, in which random subsamples of different sample sizes were taken from each original data set and successively analysed. In each case, two data sets with identical sample size and species, but comprising BMR data with different degrees of reliability, were compared. Selection criteria had minor effects on scaling equations computed for large clades (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. <100) results often appeared erratic. Subsampling revealed that sample size has a non-linear effect on the probability of a zero slope for a given relationship. Depending on the species included, results could differ dramatically, especially with small sample sizes. Overall, our findings indicate a need for due diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.

  10. A fast least-squares algorithm for population inference

    PubMed Central

    2013-01-01

    Background Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual’s genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. Results We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. Conclusions The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate. PMID:23343408
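
    The least-squares idea can be illustrated as an alternating least-squares factorization of the genotype matrix, G ≈ 2QP, with the rows of Q on the simplex and allele frequencies P in [0, 1]. The sketch below is a simplified illustration under those assumptions, using clipping and renormalization as a crude projection; it is not the algorithm evaluated in the paper, and all names are hypothetical.

        import numpy as np

        def als_admixture(G, K, n_iter=200, seed=0):
            """Alternating least-squares sketch for G ~ 2 * Q @ P.

            G: (n_individuals, n_snps) genotype matrix with entries 0/1/2.
            Q: (n, K) ancestry proportions (rows sum to 1).
            P: (K, m) allele frequencies in [0, 1].
            """
            rng = np.random.default_rng(seed)
            n, m = G.shape
            Q = rng.dirichlet(np.ones(K), size=n)          # random simplex rows
            P = rng.uniform(0.05, 0.95, size=(K, m))
            F = G / 2.0                                     # observed allele dosage fractions
            for _ in range(n_iter):
                # Update P with Q fixed, then clip to valid frequencies.
                P, *_ = np.linalg.lstsq(Q, F, rcond=None)
                P = np.clip(P, 1e-4, 1 - 1e-4)
                # Update Q with P fixed, then crudely project rows back onto the simplex.
                QT, *_ = np.linalg.lstsq(P.T, F.T, rcond=None)
                Q = np.clip(QT.T, 0.0, None)
                Q = Q / (Q.sum(axis=1, keepdims=True) + 1e-12)
            return Q, P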

  11. A fast least-squares algorithm for population inference.

    PubMed

    Parry, R Mitchell; Wang, May D

    2013-01-23

    Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate.

  12. Effect Of Neodymium Substitution In Structural Characteristics Of Magnesium Ferrite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thankachan, Smitha; Binu, P. J.; Xavier, Sheena

    2011-10-20

    The effect of Nd{sup 3+} substitution on the structural properties of Magnesium ferrite was studied in the series MgNd{sub x}Fe{sub 2-x}O{sub 4}, where x = 0 to 0.3 in steps of 0.05. The series was prepared by sol-gel technique which is one of the novel technique to prepare nanosized samples. Structural characterization was done using X-ray diffractometer and Fourier Transform Infrared Spectrometer. XRD analysis reveals the prepared samples are single phasic till x = 0.2. From x0 = .25, a secondary phase of iron neodymium oxide appears along with the spinel phase. Particle size calculation shows the prepared samples aremore » in the 9nm to 11 nm regime. Lattice parameter was found to increase with concentration of Nd. XRD and FTIR analysis confirmed spinel structure of the prepared samples. XRF result shows the expected composition of prepared samples. The frequency dependence of the dielectric constant in the range 100 Hz--120MHz was also studied« less

  13. Effect of Carbonate Matrix on δ15N Analysis Tested for Simple Bulk Combustion on Coupled Elemental Analyzer-GC-IRMS

    NASA Astrophysics Data System (ADS)

    Saxena, D.; Grossman, E. L.; Maupin, C. R.; Roark, B.; O'Dea, A.

    2016-12-01

    Nitrogen isotopes (15N/14N) have been extensively used to reconstruct trophic structure, anthropogenic nutrient loading, ecosystem dynamics, and nutrient cycling in terrestrial and marine systems. Extending similar efforts to deep time is critical to investigate sources and fluxes of nutrients in past oceans, and explore causes of biotic turnover. To test the fidelity of N-isotope analyses of biogenic carbonate samples by simple bulk combustion, we performed two sets of experiments involving varying proportions of reagent CaCO3 (0, 2, 35 mg) and three organic standards (3.7-47.2 µg) viz. USGS40 (δ15NAir = -4.52‰), USGS41 (δ15NAir = +47.57‰), and in-house standard Rice (δ15NAir = +1.18‰). At high N contents (15-47.2 µg), δ15N values for CaCO3-amended samples are consistently either 0.5‰ higher (USGS40, -4.5‰), equivalent (Rice, 1.2‰), or 0.5‰ lower (USGS41, 47.6‰) relative to unamended samples. The difference thus depends on the δ15N of the standard relative to air. With decreasing N content (10-15 µg), δ15N values for CaCO3-amended samples diverge from expected values, with 35 mg CaCO3 samples diverging at the highest N content and 0 mg CaCO3 samples at the lowest (10 µg). The latter matches the lower sample-size limit for accurate measurement under the experimental conditions. At very low sample size (3.7-10 µg), all unamended standards show decreasing δ15N with decreasing N content, presumably because of non-linearity in instrument electronics and ion source behavior. The δ15N values of amended USGS41 also decrease with decreasing N content, but those of amended USGS40 and Rice samples increase, with samples containing more CaCO3 (35 versus 2 mg) showing greater deviation from expected values. Potential causes for deviation in δ15N values with CaCO3 amendments include N2 contamination from tin capsules and reagent CaCO3, and incomplete combustion due to energy consumption during CaCO3 decomposition. While tin capsules and reagent CaCO3 provide some N background (0.07 Vs and 0.23 Vs [40 mg CaCO3] respectively), mass balance considerations suggest incomplete combustion likely caused the deviation from true values. Nevertheless, for higher N content samples reliable δ15N measurements can be made with simple bulk combustion of carbonate.

  14. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.

  15. Empirical evaluation of predator-driven diel vertical migration in Lake Superior

    USGS Publications Warehouse

    Stockwell, J.D.; Hrabik, T.R.; Jensen, O.P.; Yule, D.L.; Balge, M.

    2010-01-01

    Recent studies on Lake Superior suggest that diel vertical migration (DVM) of prey (generalized Coregonus spp.) may be influenced by the density of predatory siscowet (Salvelinus namaycush). We empirically evaluated this hypothesis using data from acoustic, midwater trawl, and bottom trawl sampling at eight Lake Superior sites during three seasons in 2005 and a subset of sites in 2006. We expected the larger-bodied cisco (Coregonus artedi) to exhibit a shallower DVM compared with the smaller-bodied kiyi (Coregonus kiyi). Although DVM of kiyi and cisco were consistent with expectations of DVM as a size-dependent, predator-mediated process, we found no relationship between siscowet density and the magnitude of DVM of either coregonid. Cisco appear to have a size refuge from siscowet predation. Kiyi and siscowet co-occur in demersal habitat > 150 m during the day, where visual predation is unlikely, suggesting predator avoidance is not a factor in the daytime distribution of kiyi. Seasonal patterns of kiyi DVM were consistent with reported DVM of their primary prey Mysis relicta. Our results suggest that consideration of nonvisual foraging, rather than lightbased foraging theory (i.e., the antipredation window), is necessary to understand the processes driving DVM in deepwater systems.

  16. Time-resolved orbital angular momentum spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noyan, Mehmet A.; Kikkawa, James M.

    We introduce pump-probe magneto-orbital spectroscopy, wherein Laguerre-Gauss optical pump pulses impart orbital angular momentum to the electronic states of a material and subsequent dynamics are studied with 100 fs time resolution. The excitation uses vortex modes that distribute angular momentum over a macroscopic area determined by the spot size, and the optical probe studies the chiral imbalance of vortex modes reflected off the sample. First observations in bulk GaAs yield transients that evolve on time scales distinctly different from population and spin relaxation, as expected, but with surprisingly large lifetimes.

  17. Small-angle Neutron Scattering Study of Magnetic Ordering and Inhomogeneity Across the Martensitic Phase Transformation in Ni50-xCoxMn40Sn10 Alloys

    DTIC Science & Technology

    2012-04-27

    Oe to 70 kOe. For low-field measurements the remnant field profile in the superconducting magnet was measured and the field at the sample nulled to 1... magnetization similar to the bulk magnetization in the austenite. In particular, the saturation magnetization of such a nanoscopic F cluster would be...expected to be significantly smaller than the bulk magnetization of 5 μB/f.u., due to both finite-size effects and competition between F and AF exchange

  18. Sampling Mars: Analytical requirements and work to do in advance

    NASA Technical Reports Server (NTRS)

    Koeberl, Christian

    1988-01-01

    Sending a mission to Mars to collect samples and return them to the Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with the knowledge of the composition and structure of Martian samples. Amongst the most exciting questions is the clarification of the SNC problem- to prove or disprove a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars, and its evolution (including the accretion history), it would be important to know if the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements, and with possible pre-return work. It is unlikely to expect that a possible Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least in that order of magnitude. This requires very careful sample selection, and very precise analytical techniques. These techniques should be able to use minimal sample sizes and on the other hand optimize the scientific output. The possibility to work with extremely small samples should not obstruct another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures are quite complicated and elaborate to ensure avoiding sampling errors. The significance of analyzing a milligram or submilligram sized sample and putting that in relationship with the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma on one hand, to minimize the sample size as far as possible in order to have the possibility of returning as many different samples as possible, and on the other hand to take a sample large enough to be representative. Whole rock samples are very useful, but should not exceed the 20 to 50 g range, except in cases of extreme inhomogeneity, because for larger samples the information tends to become redundant. Soil samples should be in the 2 to 10 g range, permitting the splitting of the returned samples for studies in different laboratories with variety of techniques.

  19. Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.

    PubMed

    Gupte, M D; Narasimhamurthy, B

    1999-06-01

    In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs, therefore, need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to the near-desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will be shifted to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level is attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels with respect to leprosy prevalence are created using this population. The distribution of the number of cases in the population was expected to follow the Poisson process, and this was also confirmed by examination. Sample sizes and corresponding critical values were computed using Poisson approximation. Initially, villages/towns are selected from the population and from each selected village/town households are selected using systematic sampling. Households instead of individuals are used as sampling units. This sampling procedure was simulated 1000 times in the computer from the base population. The results in four different prevalence situations meet the required limits of Type I error of 5% and 90% Power. It is concluded that after validation under field conditions, this method can be considered for a rapid assessment of the leprosy situation.
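
    A minimal sketch of the Poisson-approximation search for an LQAS sample size and decision value, under assumed error limits (5% type I error, 90% power) and illustrative prevalence thresholds; the paper's own design values and sampling plan are not reproduced here.

        from scipy.stats import poisson

        def lqas_design(p_high, p_low, alpha=0.05, beta=0.10, n_max=200_000):
            """Smallest sample size n and decision value d for an LQAS rule.

            Classify an area as 'low prevalence' when the observed case count is <= d.
            alpha: max probability of calling a truly high-prevalence (p_high) area low.
            beta:  max probability of calling a truly low-prevalence (p_low) area high.
            Case counts are approximated as Poisson with mean n * prevalence.
            """
            for n in range(1_000, n_max, 1_000):
                for d in range(0, 100):
                    ok_alpha = poisson.cdf(d, n * p_high) <= alpha
                    ok_beta = 1.0 - poisson.cdf(d, n * p_low) <= beta
                    if ok_alpha and ok_beta:
                        return n, d
            return None

        # Illustrative call with thresholds of 2 versus 1 case per 10,000 population:
        # lqas_design(p_high=2e-4, p_low=1e-4)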

  20. Release of mineral-bound water prior to subduction tied to shallow seismogenic slip off Sumatra

    NASA Astrophysics Data System (ADS)

    Hüpers, Andre; Torres, Marta E.; Owari, Satoko; McNeill, Lisa C.; Dugan, Brandon; Henstock, Timothy J.; Milliken, Kitty L.; Petronotis, Katerina E.; Backman, Jan; Bourlange, Sylvain; Chemale, Farid; Chen, Wenhuang; Colson, Tobias A.; Frederik, Marina C. G.; Guèrin, Gilles; Hamahashi, Mari; House, Brian M.; Jeppson, Tamara N.; Kachovich, Sarah; Kenigsberg, Abby R.; Kuranaga, Mebae; Kutterolf, Steffen; Mitchison, Freya L.; Mukoyoshi, Hideki; Nair, Nisha; Pickering, Kevin T.; Pouderoux, Hugo F. A.; Shan, Yehua; Song, Insun; Vannucchi, Paola; Vrolijk, Peter J.; Yang, Tao; Zhao, Xixi

    2017-05-01

    Plate-boundary fault rupture during the 2004 Sumatra-Andaman subduction earthquake extended closer to the trench than expected, increasing earthquake and tsunami size. International Ocean Discovery Program Expedition 362 sampled incoming sediments offshore northern Sumatra, revealing recent release of fresh water within the deep sediments. Thermal modeling links this freshening to amorphous silica dehydration driven by rapid burial-induced temperature increases in the past 9 million years. Complete dehydration of silicates is expected before plate subduction, contrasting with prevailing models for subduction seismogenesis calling for fluid production during subduction. Shallow slip offshore Sumatra appears driven by diagenetic strengthening of deeply buried fault-forming sediments, contrasting with weakening proposed for the shallow Tohoku-Oki 2011 rupture, but our results are applicable to other thickly sedimented subduction zones including those with limited earthquake records.

  1. The bridge between real and ideal: students perception on quality gap in reality and their educational expectations.

    PubMed

    Nabilou, Bahram; Khorasani-Zavareh, Davoud

    2014-09-01

    Studies in higher education indicate that students' expectations of educational services are often not met, particularly in developing countries, implying a gap between students' perceptions of the current situation and their expectations of educational services. The aim of this study was to determine the gap between the perceptions and expectations of students at various levels of undergraduate courses at Urmia University of Medical Sciences, Iran. This longitudinal study was conducted in the academic year 2007-2008 at the Urmia University of Medical Sciences. In total, 173 students from various courses were selected as the sample. The SERVQUAL questionnaire was used as the instrument. Descriptive statistics followed by Friedman and Wilcoxon tests were used to determine the significance of the quality gap across the five dimensions and to evaluate the gap between student perceptions and expectations, respectively. The Spearman test was also used to determine the relationship between dimensions. Overall, 80% of educational expectations were not met; there was a negative gap for all items and dimensions, and the gap was more negative for educational experts (-1.45 ± 0.89) than for teachers (-0.97 ± 0.97). The largest gap for teachers was in the empathy dimension (-1.11 ± 1.51), while for experts it was in the assurance dimension (-1.58 ± 1). The existence of gaps in all dimensions indicates that students' expectations are not met, pointing to their dissatisfaction and to a need for improvement in all dimensions.

  2. Primary care providers’ perspectives on discontinuing prostate cancer screening

    PubMed Central

    Pollack, Craig E.; Platz, Elizabeth A.; Bhavsar, Nrupen A.; Noronha, Gary; Green, Gene E.; Chen, Sean; Carter, H. Ballentine

    2012-01-01

    Background Clinical guidelines recommend against routine prostate specific antigen (PSA) screening in older men and those with lower life expectancies. We examined providers’ decision-making regarding discontinuing PSA screening. Methods We administered a survey of primary care providers from a large, university-affiliated primary care practice. Providers were asked about their current screening practices, factors that influence their decision to discontinue screening, and barriers to discontinuing screening. Bivariate and multivariable logistic regression analyses were used to examine whether taking age and/or life expectancy into account and barriers to discontinuing were associated with clinician characteristics and practice styles. Results 88.7% of providers participated in the survey (125 out of 141). Over half (59.3%) took both age and life expectancy into account whereas 12.2% did not consider either in their decisions to discontinue PSA screening. Providers varied in the age at which they typically stop screening, and the majority (66.4%) reported difficulty in assessing life expectancy. Taking patient age and life expectancy into account was not associated with provider characteristics or practice styles. The most frequently cited barriers to discontinuing PSA screening were patient expectation (74.4%) and time constraints (66.4%). Black providers were significantly less likely than non-black providers to endorse barriers related to time constraints and clinical uncertainty, though these results are limited by the small sample size of black providers. Conclusion Though age and life expectancy often figure prominently in decisions to employ screening, providers face multiple barriers to discontinuing routine PSA screening. PMID:22517310

  3. Mind over platter: pre-meal planning and the control of meal size in humans.

    PubMed

    Brunstrom, J M

    2014-07-01

    It is widely accepted that meal size is governed by psychological and physiological processes that generate fullness towards the end of a meal. However, observations of natural eating behaviour suggest that this preoccupation with within-meal events may be misplaced and that the role of immediate post-ingestive feedback (for example, gastric stretch) has been overstated. This review considers the proposition that the locus of control is more likely to be expressed in decisions about portion size, before a meal begins. Consistent with this idea, we have discovered that people are extremely adept at estimating the 'expected satiety' and 'expected satiation' of different foods. These expectations are learned over time and they are highly correlated with the number of calories that end up on our plate. Indeed, across a range of foods, the large variation in expected satiety/satiation may be a more important determinant of meal size than relatively subtle differences in palatability. Building on related advances, it would also appear that memory for portion size has an important role in generating satiety after a meal has been consumed. Together, these findings expose the importance of planning and episodic memory in the control of appetite and food intake in humans.

  4. Classification of breast tumour using electrical impedance and machine learning techniques.

    PubMed

    Al Amin, Abdullah; Parvin, Shahnaj; Kadir, M A; Tahmid, Tasmia; Alam, S Kaisar; Siddique-e Rabbani, K

    2014-06-01

    When a breast lump is detected through palpation, mammography or ultrasonography, the final test for characterization of the tumour, whether it is malignant or benign, is biopsy. This is invasive and carries the hazards associated with any surgical procedure. The present work was undertaken to study the feasibility of such characterization using non-invasive electrical impedance measurements and machine learning techniques. Because of differences in cell morphology between malignant and benign tumours, changes are expected in impedance both at a fixed frequency and as a function of measurement frequency. Tetrapolar impedance measurement (TPIM) using four electrodes at the corners of a square region of side 4 cm was used for zone localization. Data of impedance in two orthogonal directions, measured at 5 and 200 kHz from 19 subjects, and their respective slopes with frequency were subjected to machine learning procedures through the use of feature plots. These patients had single or multiple tumours of various types in one or both breasts, and four of them had malignant tumours, as diagnosed by core biopsy. Although the size and depth of the tumours are expected to affect the measurements, this preliminary work ignored these effects. Selecting 12 features from the above measurements, feature plots were drawn for the 19 patients, which displayed considerable overlap between malignant and benign cases. However, based on the observed qualitative trend of the measured values, when all the feature values were divided by the respective patient ages, the two types of tumours separated out reasonably well. Using the K-NN classification method, the results obtained are: positive predictive value, 60%; negative predictive value, 93%; sensitivity, 75%; specificity, 87%; and efficacy, 84%, which are very good for such a test on a small sample size. A study on a larger sample is expected to give confidence in this technique, and further improvement of the technique may give it the ability to replace biopsy.
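
    A hedged sketch of the classification step described here: age-normalize the impedance-derived features and estimate sensitivity and specificity by leave-one-out cross-validation. The data below are synthetic placeholders; neither the study's dataset nor its exact K-NN settings are reproduced.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(42)
        features = rng.normal(size=(19, 12))            # placeholder impedance-derived features
        ages = rng.uniform(25, 70, size=19)             # placeholder patient ages
        labels = np.array([1] * 4 + [0] * 15)           # 4 malignant, 15 benign (as in the study)

        X = features / ages[:, None]                    # divide every feature by patient age
        pred = cross_val_predict(KNeighborsClassifier(n_neighbors=3), X, labels,
                                 cv=LeaveOneOut())

        sensitivity = np.sum((pred == 1) & (labels == 1)) / np.sum(labels == 1)
        specificity = np.sum((pred == 0) & (labels == 0)) / np.sum(labels == 0)
        print(sensitivity, specificity)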

  5. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    PubMed

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in their global properties. This apparent paradox is a consequence of the small numbers of simultaneously recorded neurons in experiment: when inferred via small sample sizes, many networks may be indistinguishable despite being globally distinct. We develop a connectivity measure that successfully classifies networks even when estimated locally with a few neurons at a time. We show that data from rat cortex is consistent with a network in which the likelihood of a connection between neurons depends on spatial distance and on nonspatial, asymmetric clustering. Copyright © 2017 the authors 0270-6474/17/378498-13$15.00/0.

  6. Application of lot quality assurance sampling for leprosy elimination monitoring--examination of some critical factors.

    PubMed

    Gupte, M D; Murthy, B N; Mahmood, K; Meeralakshmi, S; Nagaraju, B; Prabhakaran, R

    2004-04-01

    The concept of elimination of an infectious disease is different from eradication and in a way from control as well. In disease elimination programmes the desired reduced level of prevalence is set up as the target to be achieved in a practical time frame. Elimination can be considered in the context of national or regional levels. Prevalence levels depend on occurrence of new cases and thus could remain fluctuating. There are no ready pragmatic methods to monitor the progress of leprosy elimination programmes. We therefore tried to explore newer methods to answer these demands. With the lowering of prevalence of leprosy to the desired level of 1 case per 10000 population at the global level, the programme administrators' concern will be shifted to smaller areas, e.g. national and sub-national levels. For monitoring this situation, we earlier observed that lot quality assurance sampling (LQAS), a quality control tool in industry, was useful in the initially high endemic areas. However, critical factors such as geographical distribution of cases and adoption of cluster sampling design instead of simple random sampling design deserve attention before LQAS could generally be recommended. The present exercise was aimed at validating applicability of LQAS, and adopting these modifications for monitoring leprosy elimination in Tamil Nadu state, which was highly endemic for leprosy. A representative sample of 64000 people drawn from eight districts of Tamil Nadu state, India, with a maximum allowable number of 25 cases, was considered, using LQAS methodology to test whether leprosy prevalence was at or below 7 per 10000 population. Expected number of cases for each district was obtained assuming Poisson distribution. Goodness of fit for the observed and expected cases (closeness of the expected number of cases to those observed) was tested through χ². Enhancing factor (design effect) for sample size was obtained by computing the intraclass correlation. The survey actually covered a population of 62157 individuals, of whom 56469 (90.8%) were examined. Ninety-six cases were detected and this number far exceeded the critical value of 25. The number of cases for each district and the number of cases in the entire surveyed area both followed Poisson distribution. The intraclass correlation coefficients were close to zero and the design effect was observed to be close to one. Based on the LQAS exercises leprosy prevalence in the state of Tamil Nadu in India was above 7 per 10000. LQAS method using clusters was validated for monitoring leprosy elimination in high endemic areas. Use of cluster sampling makes this method further useful as a rapid assessment procedure. This method needs to be tested for its applicability in moderate and low endemic areas, where the sample size may need increasing. It is further possible to consider LQAS as a monitoring tool for elimination programmes with respect to other disease conditions.
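
    The decision arithmetic of the LQAS rule above can be sketched with a Poisson approximation. The sample size of 64,000, the critical value of 25, and the 7 per 10,000 threshold come from the abstract; the second prevalence value (1 per 10,000, the global elimination goal) and the use of scipy are assumptions for illustration.

      # Sketch of the LQAS decision rule using a Poisson approximation (illustrative).
      from scipy.stats import poisson

      sample_size = 64_000
      critical_value = 25              # accept "prevalence at or below threshold" only if cases <= 25

      for prevalence in (1 / 10_000, 7 / 10_000):
          expected_cases = sample_size * prevalence
          p_accept = poisson.cdf(critical_value, expected_cases)
          print(f"prevalence {prevalence:.4f}: expected cases {expected_cases:5.1f}, "
                f"P(accept lot) = {p_accept:.3f}")

      # With a design effect near one (intraclass correlation close to zero, as observed),
      # no inflation of the simple-random-sampling calculation is needed for the cluster design.
      observed_cases = 96
      print("decision:", "prevalence above 7/10000" if observed_cases > critical_value
            else "consistent with prevalence at or below 7/10000")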

  7. Bayesian methods for the design and interpretation of clinical trials in very rare diseases

    PubMed Central

    Hampson, Lisa V; Whitehead, John; Eleftheriou, Despina; Brogan, Paul

    2014-01-01

    This paper considers the design and interpretation of clinical trials comparing treatments for conditions so rare that worldwide recruitment efforts are likely to yield total sample sizes of 50 or fewer, even when patients are recruited over several years. For such studies, the sample size needed to meet a conventional frequentist power requirement is clearly infeasible. Rather, the expectation of any such trial has to be limited to the generation of an improved understanding of treatment options. We propose a Bayesian approach for the conduct of rare-disease trials comparing an experimental treatment with a control where patient responses are classified as a success or failure. A systematic elicitation from clinicians of their beliefs concerning treatment efficacy is used to establish Bayesian priors for unknown model parameters. The process of determining the prior is described, including the possibility of formally considering results from related trials. As sample sizes are small, it is possible to compute all possible posterior distributions of the two success rates. A number of allocation ratios between the two treatment groups can be considered with a view to maximising the prior probability that the trial concludes recommending the new treatment when in fact it is non-inferior to control. Consideration of the extent to which opinion can be changed, even by data from the best feasible design, can help to determine whether such a trial is worthwhile. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24957522
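
    A minimal sketch of the kind of posterior calculation involved, assuming independent Beta priors for the two success rates; the prior parameters, allocation, and non-inferiority margin below are invented placeholders, not the elicited values from the paper.

      # Sketch of a Bayesian two-arm comparison with binary outcomes (illustrative priors).
      import numpy as np
      from scipy.stats import beta

      prior_exp = (3.0, 2.0)   # Beta prior for the experimental success rate (placeholder)
      prior_ctl = (4.0, 3.0)   # Beta prior for the control success rate (placeholder)
      margin = 0.10            # non-inferiority margin (placeholder)

      def prob_noninferior(succ_e, n_e, succ_c, n_c, n_grid=2001):
          """P(p_E > p_C - margin | data) with independent Beta posteriors."""
          post_e = beta(prior_exp[0] + succ_e, prior_exp[1] + n_e - succ_e)
          post_c = beta(prior_ctl[0] + succ_c, prior_ctl[1] + n_c - succ_c)
          p = np.linspace(0.0, 1.0, n_grid)
          dp = p[1] - p[0]
          # Integrate over the control rate; post_e.sf gives P(p_E > p - margin).
          return float(np.sum(post_c.pdf(p) * post_e.sf(p - margin)) * dp)

      # With a total of 30 patients allocated 2:1, every possible outcome can be enumerated.
      n_e, n_c = 20, 10
      for succ_e, succ_c in [(14, 6), (10, 7), (18, 5)]:
          print(f"E {succ_e}/{n_e}, C {succ_c}/{n_c}: "
                f"P(non-inferior) = {prob_noninferior(succ_e, n_e, succ_c, n_c):.3f}")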

  8. Association of body mass with price of bushmeat in Nigeria and Cameroon.

    PubMed

    Macdonald, David W; Johnson, Paul J; Albrechtsen, Lise; Dutton, Adam; Seymour, Sarah; Dupain, Jef; Hall, Amy; Fa, John E

    2011-12-01

    Spatially extensive patterns of bushmeat extraction (and the processes underlying these patterns) have not been explored. We used data from a large sample (n= 87) of bushmeat trading points in urban and rural localities in Nigeria and Cameroon to explore extraction patterns at a regional level. In 7,594 sample days, we observed 61,267 transactions involving whole carcasses. Rural and urban trading points differed in species for sale and in meat condition (fresh or smoked). Carcass price was principally associated with body mass, with little evidence that taxonomic group (primate, rodent, ungulate, or mammalian carnivore) affected price. Moreover, meat condition was not consistently associated with price. However, some individual species were more expensive throughout the region than would be expected for their size. Prices were weakly positively correlated with human settlement size and were highest in urban areas. Supply did not increase proportionally as human settlement size increased, such that per capita supply was significantly lower in urban centers than in rural areas. Policy options, including banning hunting of more vulnerable species (those that have low reproductive rates), may help to conserve some species consumed as bushmeat because carcass prices indicate that faster breeding, and therefore the more sustainable species, may be substituted and readily accepted by consumers. ©2011 Society for Conservation Biology.

  9. Patients' Expectations and Perceptions of Service Quality in the Selected Hospitals.

    PubMed

    Nadi, Aliasghar; Shojaee, Jalil; Abedi, Ghassem; Siamian, Hasan; Abedini, Ehsan; Rostami, Farideh

    2016-04-01

    Hospital's success depends on patients' expectations, perceptions, and judgment of the quality of services provided by hospitals. This study was conducted to assess patients' perceptions and expectations of the quality of inpatient health care in Vali-Asr hospital, Ghaemshahr, and Imam Khomeini and Shafa Hospitals, Sari. In terms of its objective this is an applied study, and in terms of methodology it is a descriptive-analytical study. The sample consists of 600 patients with at least 24 hours of hospitalization in the internal, surgery, women, and children sectors of Vali-Asr, Ghaemshahr, Imam Khomeini, and Shafa Hospitals. Using a stratified random sampling method, patients were selected from each class in proportion to its size. The required data were collected through the standard SERVQUAL questionnaire and then analyzed using the SPSS software. The overall mean value and standard deviation of expectations were equal to 10.4 and 28, respectively. The mean value for the field of perception was 69.2 and the relevant standard deviation was 26. In terms of patients and hospital visits in concrete cases, the highest priority is related to empathy. The second priority is related to physical appearance, the third priority is related to responsiveness, the fourth priority is related to assurance, and the lowest priority is related to the reliability dimension of the SERVQUAL approach. Examining the gap between patients' perceptions and expectations, the widest gap was observed in the Vali-Asr Hospital with a mean and SD of (-92.0±39.0) and the narrowest gap was observed in Shafa Hospital with a mean of (-39.9±44.0). According to the Kruskal-Wallis test, the differences observed among these three hospitals were significant. The results showed that patients' expectations had not been met in any of the examined dimensions and their satisfaction had not been achieved. It therefore seems necessary for managers and relevant authorities to plan and pay special attention to this important issue.

  10. Chefs' opinions of restaurant portion sizes.

    PubMed

    Condrasky, Marge; Ledikwe, Jenny H; Flood, Julie E; Rolls, Barbara J

    2007-08-01

    The objectives were to determine who establishes restaurant portion sizes and factors that influence these decisions, and to examine chefs' opinions regarding portion size, nutrition information, and weight management. A survey was distributed to chefs to obtain information about who is responsible for determining restaurant portion sizes, factors influencing restaurant portion sizes, what food portion sizes are being served in restaurants, and chefs' opinions regarding nutrition information, health, and body weight. The final sample consisted of 300 chefs attending various culinary meetings. Executive chefs were identified as being primarily responsible for establishing portion sizes served in restaurants. Factors reported to have a strong influence on restaurant portion sizes included presentation of foods, food cost, and customer expectations. While 76% of chefs thought that they served "regular" portions, the actual portions of steak and pasta they reported serving were 2 to 4 times larger than serving sizes recommended by the U.S. government. Chefs indicated that they believe that the amount of food served influences how much patrons consume and that large portions are a problem for weight control, but their opinions were mixed regarding whether it is the customer's responsibility to eat an appropriate amount when served a large portion of food. Portion size is a key determinant of energy intake, and the results from this study suggest that cultural norms and economic value strongly influence the determination of restaurant portion sizes. Strategies are needed to encourage chefs to provide and promote portions that are appropriate for customers' energy requirements.

  11. MicrOmega: a VIS/NIR hyperspectral microscope for in situ analysis in space

    NASA Astrophysics Data System (ADS)

    Leroi, V.; Bibring, J. P.; Berthé, M.

    2008-07-01

    MicrOmega is an ultra-miniaturized spectral microscope for in situ analysis of samples. It is composed of 2 microscopes: one with a spatial sampling of 5 μm, working in 4 colors in the visible range, and one NIR hyperspectral microscope covering the spectral range 0.9-4 μm with a spatial sampling of 20 μm per pixel (described in this paper). MicrOmega/NIR illuminates and images samples a few mm in size and acquires the NIR spectrum of each resolved pixel in up to 600 contiguous spectral channels. The goal of this instrument is to analyse in situ the composition of collected samples at almost their grain size scale, in a non-destructive way. It should be among the first set of instruments that will analyse the sample and enable other complementary analyses to be performed on it. With the spectral range and resolution chosen, a wide variety of constituents can be identified: minerals, such as pyroxene and olivine, ferric oxides, hydrated phyllosilicates, sulfates and carbonates; ices and organics. The composition of the various phases within a given sample is a critical record of its formation and evolution. Coupled to the mapping information, it provides unique clues to describe the history of the parent body. In particular, the capability to identify hydrated grains and to characterize their adjacent phases has a huge potential in the search for potential bio-relics. We will present the major instrumental principles and specifications of MicrOmega/NIR, and its expected performance, in particular for the ESA/ExoMars mission.

  12. Documentation of particle-size analyzer time series, and discrete suspended-sediment and bed-sediment sample data collection, Niobrara River near Spencer, Nebraska, October 2014

    USGS Publications Warehouse

    Schaepe, Nathaniel J.; Coleman, Anthony M.; Zelt, Ronald B.

    2018-04-06

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Army Corps of Engineers, monitored a sediment release by Nebraska Public Power District from Spencer Dam located on the Niobrara River near Spencer, Nebraska, during the fall of 2014. The accumulated sediment behind Spencer Dam ordinarily is released semiannually; however, the spring 2014 release was postponed until the fall. Because of the postponement, the scheduled fall sediment release would consist of a larger volume of sediment. The larger than normal sediment release expected in fall 2014 provided an opportunity for the USGS and U.S. Army Corps of Engineers to improve the understanding of sediment transport during reservoir sediment releases. A primary objective was to collect continuous suspended-sediment data during the first days of the sediment release to document rapid changes in sediment concentrations. For this purpose, the USGS installed a laser-diffraction particle-size analyzer at a site near the outflow of the dam to collect continuous suspended-sediment data. The laser-diffraction particle-size analyzer measured volumetric particle concentration and particle-size distribution from October 1 to 2 (pre-sediment release) and October 5 to 9 (during sediment release). Additionally, the USGS manually collected discrete suspended-sediment and bed-sediment samples before, during, and after the sediment release. Samples were collected at two sites upstream from Spencer Dam and at three bridges downstream from Spencer Dam. The resulting datasets and basic metadata associated with the datasets were published as a data release; this report provides additional documentation about the data collection methods and the quality of the data.

  13. Calibrations and Comparisons of Aerosol Spectrometers linking Ground and Airborne Measurements

    NASA Astrophysics Data System (ADS)

    Williamson, C.; Brock, C. A.; Erdesz, F.

    2015-12-01

    The nucleation-mode aerosol size spectrometer (NMASS), a fast-time response instrument measuring aerosol size distributions between 5 and 60 nm, is to sample in the boundary layer and free troposphere on NASA's Atmospheric Tomography mission (ATom), providing contiguous data with global coverage in all four seasons. In preparation for this the NMASS is calibrated for the expected flight conditions and compatibility studies are made with ground-based instrumentation. The NMASS comprises 5 parallel condensation particle counters (CPCs) using perfluoro-tributylamine as a working fluid. Understanding the variation of CPC counting efficiencies with respect to the chemical composition of the sample is important for accurate data analysis and can be used to give indirect information about sample chemical composition. This variation is strongly dependent on the working fluid. The absolute responses and associated variations of the NMASS to ammonium sulfate and limonene ozonolysis products, compounds pertinent to the composition of particles nucleated in the free troposphere and boundary layer, are compared to those of butanol, diethylene-glycol and water based CPCs, which are more commonly used in ground-based measurements. While fast time-response is key to measuring aerosol size distributions on flights, high size-resolution is often prioritized for ground-based measurements, and so a scanning mobility particle sizer (SMPS) is commonly used. Inter-comparison between NMASS and SMPS data is non-trivial because of the different working principles and resolutions of the instruments and yet it is vital, for example, for understanding the sources of particles observed during flights and the global relevance of phenomena observed from field stations and in chambers. We report compatibility studies on inversions of data from the SMPS and NMASS, evaluating temporal and spatial resolution and sources of uncertainty.

  14. Evaluation of Low-Gravity Smoke Particulate for Spacecraft Fire Detection

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Mulholland, George; Meyer, Marit; Yuan, Zeng-guang; Cleary, Thomas; Yang, Jiann; Greenberg, Paul; Bryg, Victoria

    2013-01-01

    Tests were conducted on the International Space Station to evaluate the smoke particulate size from materials and conditions that are typical of those expected in spacecraft fires. Five different materials representative of those found in spacecraft (Teflon, Kapton, cotton, silicone rubber and Pyrell) were heated to temperatures below the ignition point with conditions controlled to provide repeatable sample surface temperatures and air flow. The air flow past the sample during the heating period ranged from quiescent to 8 cm/s. The effective transport time to the measurement instruments was varied from 11 to 800 seconds to simulate different smoke transport conditions in spacecraft. The resultant aerosol was evaluated by three instruments which measured different moments of the particle size distribution. These moment diagnostics were used to determine the particle number concentration (zeroth moment), the diameter concentration (first moment), and the mass concentration (third moment). These statistics were combined to determine the diameter of average mass and the count mean diameter, and by assuming a log-normal distribution, the geometric mean diameter and the geometric standard deviations were also calculated. Smoke particle samples were collected on TEM grids using a thermal precipitator for post-flight analysis. The TEM grids were analyzed to determine the particle morphology and shape parameters. The different materials produced particles with significantly different morphologies. Overall the majority of the average smoke particle sizes were found to be in the 200 to 400 nanometer range, with the quiescent cases and the cases with increased transport time typically producing substantially larger particles. The results varied between materials but the smoke particles produced in low gravity were typically twice the size of particles produced in normal gravity. These results can be used to establish design requirements for future spacecraft smoke detectors.
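
    The moment arithmetic described above can be sketched as follows; the three concentrations are placeholder values, and the mass concentration is assumed to have already been converted to a diameter-cubed concentration using the particle density.

      # Sketch of recovering log-normal size parameters from three measured moments (illustrative values).
      import numpy as np

      M0 = 1.0e6     # zeroth moment: number concentration, particles / m^3 (placeholder)
      M1 = 0.30      # first moment: diameter concentration, m / m^3 (placeholder)
      M3 = 6.4e-14   # third moment: diameter-cubed concentration, m^3 / m^3 (placeholder)

      count_mean_diameter = M1 / M0                     # first moment / zeroth moment
      diameter_of_avg_mass = (M3 / M0) ** (1.0 / 3.0)   # cube root of the mean of d^3

      # For a log-normal distribution:
      #   M1/M0 = dg * exp(ln(sg)^2 / 2)  and  (M3/M0)^(1/3) = dg * exp(3 * ln(sg)^2 / 2)
      ln_sg_sq = np.log(diameter_of_avg_mass / count_mean_diameter)
      geo_std = np.exp(np.sqrt(ln_sg_sq))
      geo_mean_diameter = count_mean_diameter * np.exp(-0.5 * ln_sg_sq)

      print(f"count mean diameter:      {count_mean_diameter * 1e9:6.0f} nm")
      print(f"diameter of average mass: {diameter_of_avg_mass * 1e9:6.0f} nm")
      print(f"geometric mean diameter:  {geo_mean_diameter * 1e9:6.0f} nm, GSD = {geo_std:.2f}")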

  15. Rheology of ice I at low stress and elevated confining pressure

    USGS Publications Warehouse

    Durham, W.B.; Stern, L.A.; Kirby, S.H.

    2001-01-01

    Triaxial compression testing of pure, polycrystalline water ice I at conditions relevant to planetary interiors and near-surface environments (differential stresses 0.45 to 10 MPa, temperatures 200 to 250 K, confining pressure 50 MPa) reveals that a complex variety of rheologies and grain structures may exist for ice and that rheology of ice appears to depend strongly on the grain structures. The creep of polycrystalline ice I with average grain size of 0.25 mm and larger is consistent with previously published dislocation creep laws, which are now extended to strain rates as low as 2 × 10⁻⁸ s⁻¹. When ice I is reduced to very fine and uniform grain size by rapid pressure release from the ice II stability field, the rheology changes dramatically. At 200 and 220 K the rheology matches the grain-size-sensitive rheology measured by Goldsby and Kohlstedt [1997, this issue] at 1 atm. This finding dispels concerns that the Goldsby and Kohlstedt results were influenced by mechanisms such as microfracturing and cavitation, processes not expected to operate at elevated pressures in planetary interiors. At 233 K and above, grain growth causes the fine-grained ice to become more creep resistant. Scanning electron microscopy investigation of some of these deformed samples shows that grains have markedly coarsened and the strain hardening can be modeled by normal grain growth and the Goldsby and Kohlstedt rheology. Several samples also displayed very heterogeneous grain sizes and high aspect ratio grain shapes. Grain-size-sensitive creep and dislocation creep coincidentally contribute roughly equal amounts of strain rate at conditions of stress, temperature, and grain size that are typical of terrestrial and planetary settings, so modeling ice dynamics in these settings must include both mechanisms. Copyright 2001 by the American Geophysical Union.
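
    The statement that dislocation creep and grain-size-sensitive creep contribute comparable strain rates can be sketched as a composite flow law; every parameter value below is a placeholder chosen for illustration, not a published creep-law constant for ice I.

      # Sketch of a composite flow law: total strain rate = dislocation creep + grain-size-sensitive creep.
      import numpy as np

      R = 8.314  # gas constant, J/mol/K

      def strain_rate(stress_mpa, temp_k, grain_size_m,
                      A_disl=1.0e5, n=4.0, Q_disl=60e3,        # placeholder dislocation-creep parameters
                      A_gss=1.0e-2, m=1.8, p=1.4, Q_gss=49e3):  # placeholder grain-size-sensitive parameters
          disl = A_disl * stress_mpa**n * np.exp(-Q_disl / (R * temp_k))
          gss = A_gss * stress_mpa**m * grain_size_m**(-p) * np.exp(-Q_gss / (R * temp_k))
          return disl + gss, disl, gss

      total, disl, gss = strain_rate(stress_mpa=1.0, temp_k=220.0, grain_size_m=0.25e-3)
      print(f"dislocation creep fraction: {disl / total:.2f}, GSS creep fraction: {gss / total:.2f}")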

  16. The near-infrared radius-luminosity relationship for active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Landt, Hermine; Bentz, Misty C.; Peterson, Bradley M.; Elvis, Martin; Ward, Martin J.; Korista, Kirk T.; Karovska, Margarita

    2011-05-01

    Black hole masses for samples of active galactic nuclei (AGNs) are currently estimated from single-epoch optical spectra. In particular, the size of the broad-line emitting region needed to compute the black hole mass is derived from the optical or ultraviolet continuum luminosity. Here we consider the relationship between the broad-line region size, R, and the near-infrared (near-IR) AGN continuum luminosity, L, as the near-IR continuum suffers less dust extinction than at shorter wavelengths and the prospects for separating the AGN continuum from host-galaxy starlight are better in the near-IR than in the optical. For a relationship of the form R ∝ L^α, we obtain for a sample of 14 reverberation-mapped AGN a best-fitting slope of α = 0.5 ± 0.1, which is consistent with the slope of the relationship in the optical band and with the value of 0.5 naïvely expected from photoionization theory. Black hole masses can then be estimated from the near-IR virial product, which is calculated using the strong and unblended Paschen broad emission lines (Paα or Paβ).
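
    A sketch of the two calculations implied above, a log-log fit for the slope α and a virial mass estimate; the radii, luminosities, line width, and virial scale factor below are invented placeholders, not the 14-object sample.

      # Sketch of fitting R ∝ L^alpha in log-log space and forming a virial black-hole mass (illustrative).
      import numpy as np

      G = 6.674e-11                                  # m^3 kg^-1 s^-2
      rng = np.random.default_rng(1)
      logL = rng.uniform(42.0, 46.0, 14)             # log near-IR luminosity, erg/s (placeholder)
      logR = 0.5 * (logL - 44.0) + 1.5 + rng.normal(0, 0.1, 14)  # log BLR radius, light-days (placeholder)

      alpha, intercept = np.polyfit(logL, logR, 1)   # slope of the radius-luminosity relation
      print(f"best-fitting slope alpha = {alpha:.2f}")

      # Virial product for one object: M = f * R * dV^2 / G, with f an order-unity scale factor.
      R_blr = 10**1.5 * 2.59e13      # 10^1.5 light-days expressed in metres
      dV = 3.0e6                     # Paschen-line velocity width, m/s (placeholder)
      f_virial = 5.5                 # adopted virial scale factor (assumption)
      M_bh = f_virial * R_blr * dV**2 / G
      print(f"virial mass ~ {M_bh / 1.989e30:.2e} solar masses")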

  17. The reliability and stability of visual working memory capacity.

    PubMed

    Xu, Z; Adam, K C S; Fang, X; Vogel, E K

    2018-04-01

    Because of the central role of working memory capacity in cognition, many studies have used short measures of working memory capacity to examine its relationship to other domains. Here, we measured the reliability and stability of visual working memory capacity, measured using a single-probe change detection task. In Experiment 1, the participants (N = 135) completed a large number of trials of a change detection task (540 in total, 180 each of set sizes 4, 6, and 8). With large numbers of both trials and participants, reliability estimates were high (α > .9). We then used an iterative down-sampling procedure to create a look-up table for expected reliability in experiments with small sample sizes. In Experiment 2, the participants (N = 79) completed 31 sessions of single-probe change detection. The first 30 sessions took place over 30 consecutive days, and the last session took place 30 days later. This unprecedented number of sessions allowed us to examine the effects of practice on stability and internal reliability. Even after much practice, individual differences were stable over time (average between-session r = .76).
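
    The iterative down-sampling idea can be sketched as follows: draw smaller subsets of participants and trials from a full data set and re-estimate reliability each time, building a look-up table of expected reliability for small-sample designs. The simulated data and the use of Cronbach's alpha over trial parcels are assumptions for illustration, not the study's exact procedure.

      # Sketch of iterative down-sampling to tabulate expected reliability (illustrative data).
      import numpy as np

      rng = np.random.default_rng(2)
      n_subj, n_trials = 135, 540
      ability = rng.normal(0.75, 0.10, n_subj)                                  # stable individual capacity (simulated)
      acc = (rng.random((n_subj, n_trials)) < ability[:, None]).astype(float)   # trial-level accuracy

      def cronbach_alpha(scores):                     # scores: subjects x items
          k = scores.shape[1]
          item_var = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      def downsampled_alpha(n_s, n_t, n_parcels=6, n_iter=200):
          vals = []
          for _ in range(n_iter):
              subj = rng.choice(n_subj, n_s, replace=False)
              trials = rng.choice(n_trials, n_t, replace=False)
              parcels = np.array_split(trials, n_parcels)
              scores = np.column_stack([acc[np.ix_(subj, p)].mean(axis=1) for p in parcels])
              vals.append(cronbach_alpha(scores))
          return np.mean(vals)

      for n_s, n_t in [(20, 60), (40, 120), (80, 300)]:
          print(f"N = {n_s:3d}, trials = {n_t:3d}: expected alpha ~ {downsampled_alpha(n_s, n_t):.2f}")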

  18. Power analysis to detect treatment effect in longitudinal studies with heterogeneous errors and incomplete data.

    PubMed

    Vallejo, Guillermo; Ato, Manuel; Fernández García, Paula; Livacic Rojas, Pablo E; Tuero Herrero, Ellián

    2016-08-01

    S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends the aforementioned work to situations where it is likely that the assumption of homogeneity of the errors across groups is not met and the error term does not follow a scaled identity covariance structure. For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable and changes in the expected value of the observed responses are not linear. The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of the statistically processed indices. The main conclusion of the study is that the proposed method accurately calculates sample size in the described situations under the stipulated power criteria.
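
    A simulation-based check along these lines can be sketched as follows; for brevity it analyses per-subject OLS slopes with a Welch t test rather than the full multilevel model, and all parameter values (group slopes, slope SD, group-specific residual SDs) are placeholders.

      # Sketch of an empirical power check for a treatment-by-time effect with heterogeneous errors.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      times = np.arange(5)                    # 5 measurement waves

      def simulate_power(n_per_group=40, slope_diff=0.3, sd_slope=0.4,
                         sd_err=(1.0, 1.8), n_sim=1000, alpha=0.05):
          hits = 0
          for _ in range(n_sim):
              slopes = []
              for g, sd_e in enumerate(sd_err):                    # heterogeneous residual SD by group
                  b1 = rng.normal(0.5 + g * slope_diff, sd_slope, n_per_group)
                  y = b1[:, None] * times + rng.normal(0, sd_e, (n_per_group, len(times)))
                  slopes.append(np.polyfit(times, y.T, 1)[0])      # per-subject OLS slope
              t, p = stats.ttest_ind(slopes[0], slopes[1], equal_var=False)  # Welch test
              hits += p < alpha
          return hits / n_sim

      print(f"empirical power: {simulate_power():.2f}")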

  19. Specific absorption and backscatter coefficient signatures in southeastern Atlantic coastal waters

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.

    1998-12-01

    Hyperspectral signatures of total absorption and reflectance of natural water samples were measured in the field and laboratory using long-pathlength absorption systems (50 cm pathlength). Water was sampled in Indian River Lagoon, Banana River and Port Canaveral, Florida. Stations were also occupied in near coastal waters out to the edge of the Gulf Stream in the vicinity of Kennedy Space Center, Florida, and in estuarine waters along Port Royal Sound and the Beaufort River tidal area in South Carolina. The measurements were utilized to calculate natural water specific absorption, total backscatter and specific backscatter optical signatures. The resulting optical cross section signatures suggest that different models are needed for the different water types and that the common linear model may only be appropriate for coastal and oceanic water types. Mean particle size estimates based on the optical cross sections suggest, as expected, that oceanic particles are smaller than those in more turbid water types. The data discussed and presented are necessary for remote sensing applications of sensors as well as for development and inversion of remote sensing algorithms.

  20. The influence of maximum running speed on eye size: a test of Leuckart's Law in mammals.

    PubMed

    Heard-Booth, Amber N; Kirk, E Christopher

    2012-06-01

    Vertebrate eye size is influenced by many factors, including body or head size, diet, and activity pattern. Locomotor speed has also been suggested to influence eye size in a relationship known as Leuckart's Law. Leuckart's Law proposes that animals capable of achieving fast locomotor speeds require large eyes to enhance visual acuity and avoid collisions with environmental obstacles. The selective influence of rapid flight has been invoked to explain the relatively large eyes of birds, but Leuckart's Law remains untested in nonavian vertebrates. This study investigates the relationship between eye size and maximum running speed in a diverse sample of mammals. Measures of axial eye diameter, maximum running speed, and body mass were collected from the published literature for 50 species from 10 mammalian orders. This analysis reveals that absolute eye size is significantly positively correlated with maximum running speed in mammals. Moreover, the relationship between eye size and running speed remains significant when the potentially confounding effects of body mass and phylogeny are statistically controlled. The results of this analysis are therefore consistent with the expectations of Leuckart's Law and demonstrate that faster-moving mammals have larger eyes than their slower-moving close relatives. Accordingly, we conclude that maximum running speed is one of several key selective factors that have influenced the evolution of eye size in mammals. Copyright © 2012 Wiley Periodicals, Inc.

  1. On the Size Dependence of the Chemical Properties of Cloud Droplets: Exploratory Studies by Aircraft

    NASA Astrophysics Data System (ADS)

    Twohy, Cynthia H.

    1992-09-01

    Clouds play an important role in the climate of the earth and in the transport and transformation of chemical species, but many questions about clouds remain unanswered. In particular, the chemical properties of droplets may vary with droplet size, with potentially important consequences. The counterflow virtual impactor (CVI) separates droplets from interstitial particles and gases in a cloud and also can collect droplets in discrete size ranges. As such, the CVI is a useful tool for investigating the chemical components present in droplets of different sizes and their potential interactions with cloud processes. The purpose of this work is twofold. First, the sampling characteristics of the airborne CVI are investigated, using data from a variety of experiments. A thorough understanding of CVI properties is necessary in order to utilize the acquired data judiciously and effectively. Although the impaction characteristics of the CVI seem to be predictable by theory, the airborne instrument is subject to influences that may result in a reduced transmission efficiency for droplets, particularly if the inlet is not properly aligned. Ways to alleviate this problem are being investigated, but currently the imperfect sampling efficiency must be taken into account during data interpretation. Relationships between the physical and chemical properties of residual particles from droplets collected by the CVI and droplet size are then explored in both stratiform and cumulus clouds. The effects of various cloud processes and measurement limitations upon these relationships are discussed. In one study, chemical analysis of different-sized droplets sampled in stratiform clouds showed a dependence of chemical composition on droplet size, with larger droplets containing higher proportions of sodium than non-sea-salt sulfate and ammonium. Larger droplets were also associated with larger residual particles, as expected from simple cloud nucleation theory. In a study of marine cumulus clouds, the CVI was combined with a cloud condensation nucleus spectrometer to study the supersaturation spectra of residual particles from droplets. The median critical supersaturation of the droplet residual particles was consistently less than or equal to the median critical supersaturation of ambient particles except at cloud top, where residual particles exhibited a variety of critical supersaturations.

  2. A small, sensitive, light-weight, and disposable aerosol spectrometer for balloon and UAV applications

    NASA Astrophysics Data System (ADS)

    Fahey, D. W.; Gao, R.; Thornberry, T. D.; Rollins, D. W.; Schwarz, J. P.; Perring, A. E.

    2013-12-01

    In-situ sampling with particle size spectrometers is an important method to provide detailed size spectra for atmospheric aerosol in the troposphere and stratosphere. The spectra are essential for understanding aerosol sources and aerosol chemical evolution and removal, and for aerosol remote sensing validation. These spectrometers are usually bulky, heavy, and expensive, thereby limiting their application to specific airborne platforms. Here we report a new type of small and light-weight optical aerosol particle size spectrometer that is sensitive enough for many aerosol applications yet is inexpensive enough to be disposable. 3D printing is used for producing structural components for simplicity and low cost. Weighing less than 1 kg individually, we expect these spectrometers can be deployed successfully on small unmanned aircraft systems (UASs) and up to 25 km on weather balloons. Immediate applications include the study of Arctic haze using the Manta UAS, detection of the Asian Tropopause Aerosol Layer in the Asian monsoon system and SAGE III validation onboard weather balloons.

  3. Genealogies and ages of cultural traits: An application of the theory of duality to the research on cultural evolution.

    PubMed

    Kobayashi, Yutaka; Wakano, Joe Yuichiro; Ohtsuki, Hisashi

    2018-05-09

    A finite-population, discrete-generation model of cultural evolution is described, in which multiple discrete traits are transmitted independently. In this model, each newborn may inherit a trait from multiple cultural parents. Transmission fails with a positive probability, unlike in population genetics. An ancestral process simulating the cultural genealogy of a sample of individuals is derived for this model. This ancestral process, denoted by M-, is shown to be dual to a process M+ describing the change in the frequency of a trait. The age-frequency spectrum is defined as a two-dimensional array whose (i,k) element is the expected number of distinct cultural traits introduced k generations ago and now carried by i individuals in a sample of a particular size n. Numerical calculations reveal that the age-frequency spectrum and related metrics undergo a critical transition from a phase with a moderate number of young, rare traits to a phase with numerous very old, common traits when the expected number of cultural parents per individual exceeds one. It is shown that M+ and M- converge to branching or deterministic processes, depending on the way population size tends to infinity, and these limiting processes bear some duality relationships. The critical behavior of the original processes M+ and M- is explained in terms of a phase transition of the branching processes. Using the results of the limiting processes in combination, we derive analytical formulae that well approximate the age-frequency spectrum and also other metrics. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Theory and applications of a deterministic approximation to the coalescent model

    PubMed Central

    Jewett, Ethan M.; Rosenberg, Noah A.

    2014-01-01

    Under the coalescent model, the random number n_t of lineages ancestral to a sample is nearly deterministic as a function of time when n_t is moderate to large in value, and it is well approximated by its expectation E[n_t]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[n_t] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation n_t ≈ E[n_t] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[n_t] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation n_t ≈ E[n_t] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
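
    One member of this family of approximations can be written down directly by treating the pairwise coalescence rate as an ordinary differential equation, dn/dt = -n(n-1)/2 with time in coalescent units, whose solution is n(t) = 1 / (1 - ((n0-1)/n0) * exp(-t/2)). The sketch below compares that closed form against a Monte Carlo estimate of E[n_t]; it illustrates the general idea, not the paper's specific formulas.

      # Sketch: deterministic ODE approximation of E[n_t] versus a Monte Carlo estimate.
      import numpy as np

      def n_deterministic(t, n0):
          """Solution of dn/dt = -n(n-1)/2, time in coalescent units."""
          c = (n0 - 1) / n0
          return 1.0 / (1.0 - c * np.exp(-t / 2.0))

      def n_simulated(t, n0, n_rep=20000, seed=4):
          """Monte Carlo estimate of E[n_t] under the standard coalescent."""
          rng = np.random.default_rng(seed)
          counts = np.full(n_rep, n0)
          elapsed = np.zeros(n_rep)
          active = (counts > 1) & (elapsed < t)
          while active.any():
              idx = np.flatnonzero(active)
              rates = counts[idx] * (counts[idx] - 1) / 2.0
              waits = rng.exponential(1.0 / rates)
              merges = elapsed[idx] + waits <= t      # coalescences that happen before time t
              elapsed[idx] += waits
              counts[idx[merges]] -= 1
              active = (counts > 1) & (elapsed < t)
          return counts.mean()

      for t in (0.1, 0.5, 1.0, 2.0):
          print(f"t = {t}: deterministic {n_deterministic(t, 50):6.2f}, "
                f"simulated {n_simulated(t, 50):6.2f}")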

  5. Environmental quality of Korean coasts as determined by modified Shannon-Wiener evenness proportion.

    PubMed

    Yoo, Jae-Won; Lee, Yong-Woo; Ruesink, Jennifer L; Lee, Chang-Gun; Kim, Chang-Soo; Park, Mi-Ra; Yoon, Kon-Tak; Hwang, In-Seo; Maeng, Jun-Ho; Rosenberg, Rutger; Hong, Jae-Sang

    2010-11-01

    The coast of the Korean peninsula experiences a range of human impacts, including pollution, shipping, reclamation, and aquaculture, that have motivated numerous local studies of macrobenthic organisms. In this paper, 1,492 subtidal stations were compiled from 23 studies (areas) to evaluate environmental quality on a broader scale. A common index in biomonitoring, Shannon-Wiener evenness proportion (SEP), could not incorporate azoic or single-species samples. This shortcoming was overcome by developing an inverse function of SEP (ISEP), which was positively correlated with independent measures of water quality available for nine sites and was not biased by the size of the sampling unit. Additionally, at Shihwa Dike, where samples were collected before and after reinstating a tidal connection with the ocean, ISEP values improved over time, as expected. Thus, it is now possible to assign Korean subtidal sites to seven ISEP "grades" and to use their values and trends to guide coastal management.

  6. Experimental determination of the frequency and field dependence of Specific Loss Power in Magnetic Fluid Hyperthermia

    NASA Astrophysics Data System (ADS)

    Cobianchi, M.; Guerrini, A.; Avolio, M.; Innocenti, C.; Corti, M.; Arosio, P.; Orsini, F.; Sangregorio, C.; Lascialfari, A.

    2017-12-01

    Magnetic nanoparticles are promising systems for biomedical applications and in particular for Magnetic Fluid Hyperthermia, a therapy that utilizes the heat released by such systems to damage tumor cells. We present an experimental study of the physical properties that influence the capability of heat release, i.e. the Specific Loss Power, SLP, of three biocompatible ferrofluid samples having a magnetic core of maghemite with different diameters d = 10.2, 14.6 and 19.7 nm. The SLP was measured as a function of frequency f and intensity H of the applied alternating magnetic field, and it turned out to depend on the core diameter, as expected. The results allowed us to highlight experimentally that the physical mechanism responsible for the heating is size-dependent and to establish, at applied constant frequency, the phenomenological functional relationship SLP = c·H^x, with 2 ≤ x < 3 for all samples. The x-value depends on sample size and field frequency, here chosen in the typical range of operating magnetic hyperthermia devices. For the smallest sample, the effective relaxation time τeff ≈ 19.5 ns obtained from SLP data is in agreement with the value estimated from magnetization data, thus confirming the validity of the Linear Response Theory model for this system at properly chosen field intensity and frequency.
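
    Extracting the phenomenological exponent in SLP = c·H^x at fixed frequency is a straight-line fit in log-log space; the field amplitudes and SLP values below are placeholders, not the measured data.

      # Sketch of fitting SLP = c * H^x from measurements at fixed frequency (illustrative values).
      import numpy as np

      H = np.array([8.0, 12.0, 17.0, 24.0])        # field amplitude, kA/m (placeholder)
      slp = np.array([11.0, 27.0, 58.0, 125.0])    # specific loss power, W/g (placeholder)

      x, log_c = np.polyfit(np.log(H), np.log(slp), 1)
      print(f"fitted exponent x = {x:.2f}, prefactor c = {np.exp(log_c):.3g}")
      # The abstract reports 2 <= x < 3, depending on core size and field frequency.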

  7. Computational fluid dynamics (CFD) studies of a miniaturized dissolution system.

    PubMed

    Frenning, G; Ahnfelt, E; Sjögren, E; Lennernäs, H

    2017-04-15

    Dissolution testing is an important tool that has applications ranging from fundamental studies of drug-release mechanisms to quality control of the final product. The rate of release of the drug from the delivery system is known to be affected by hydrodynamics. In this study we used computational fluid dynamics to simulate and investigate the hydrodynamics in a novel miniaturized dissolution method for parenteral formulations. The dissolution method is based on a rotating disc system and uses a rotating sample reservoir which is separated from the remaining dissolution medium by a nylon screen. Sample reservoirs of two sizes were investigated (SR6 and SR8) and the hydrodynamic studies were performed at rotation rates of 100, 200 and 400 rpm. The overall fluid flow was similar for all investigated cases, with a lateral upward spiraling motion and central downward motion in the form of a vortex to and through the screen. The simulations indicated that the exchange of dissolution medium between the sample reservoir and the remaining release medium was rapid for typical screens, for which almost complete mixing would be expected to occur within less than one minute at 400 rpm. The local hydrodynamic conditions in the sample reservoirs depended on their size; SR8 appeared to be relatively more affected than SR6 by the resistance to liquid flow resulting from the screen. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Deep-Sea Macrobenthos Community Structure Proximal to the 2010 Macondo Well Blowout (2010-2011)

    NASA Astrophysics Data System (ADS)

    Briggs, K. B.; Brunner, C. A.; Yeager, K. M.

    2017-12-01

    Macrobenthos, polycyclic aromatic hydrocarbons (PAH) and sedimentary organic carbon (SOC) were sampled by multicorer in the vicinity of the Deepwater Horizon well head in October 2010 and 2011 to assess the effects of the April 2010 spill. Four stations were sampled east of the well head, four stations were sampled west of the well head, and "control" stations were sampled 58 and 65 km to the southwest. The macrobenthos community, as expected for continental slope/bathyal (water depth 1160-1760 m) benthos, was highly diverse. Polychaetes dominated at all stations, with either crustaceans or mollusks comprising the next most abundant taxon. The stations within five km of the well head showed slightly lower diversity than the more distal stations six months after the blowout. Compared to the "control" station, proportions of suspension feeders were generally depressed at stations with high PAH concentrations. Anomalously high values for abundance and diversity (and PAH) were found at one station 20 km west of the well head. The median body size of macrobenthos was negatively correlated with total PAH concentration, with 74% of the variation in median size explained by variation in PAH, when the anomalous station was excluded. Macrobenthos abundance did not appear to be influenced by SOC. Abundance and diversity of the macrobenthos was generally higher 18 months after the blowout, with measured PAH concentrations diminished to below background level.

  9. The global morphology of the solar wind interaction with comet Churyumov-Gerasimenko

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendis, D. A.; Horányi, M.

    2014-10-10

    The forthcoming Rosetta-Philae mission to comet 67P/Churyumov-Gerasimenko provides a novel opportunity to observe the variable nature of the solar wind interaction with a comet over an extended range of heliocentric distance. We use a simple analytical one-dimensional MHD model to estimate the sizes of the two most prominent features in the global structure of the solar wind interaction with a comet. When the heliocentric distance of the comet reaches d ≤ 1.51 AU, we expect a sharp shock to be observed, whose size would increase monotonically as the comet approaches the Sun, reaching a value ≅ 15,000 km at perihelion (d ≅ 1.29 AU). Upstream of the shock, we expect the velocity-space distribution of the picked up cometary ions to be essentially gyrotropic. A well-defined ionopause is predicted when d ≤ 1.61 AU, though its size is expected to be only ≅25 km at perihelion, and it is expected to be susceptible to the 'flute' instability due to its small size. Consequently, we expect the magnetic field to penetrate all the way to the surface of the nucleus. We conclude with a brief discussion of the response of the comet's plasma environment to fast temporal variations in the solar wind.

  10. Fulfilment of knowledge expectations among family members of patients undergoing arthroplasty: a European perspective.

    PubMed

    Sigurdardottir, Arun K; Leino-Kilpi, Helena; Charalambous, Andreas; Katajisto, Jouko; Stark, Åsa Johansson; Sourtzi, Panayota; Zabalegui, Adelaida; Valkeapää, Kirsi

    2015-12-01

    In the recovery process of arthroplasty patients, their family members play an important role due to short hospital stay and increased age of patients. Family members need to have knowledge to be able to support the patient. The aim of this study was to explore expected and received knowledge in family members of arthroplasty patients and describe the relationships between the differences in received and expected knowledge and background factors, country, information and control preferences and access to knowledge. The study was conducted in six European countries (Cyprus, Greece, Finland, Iceland, Spain and Sweden). The study design was cross-cultural, prospective and comparative with two measurement points: pre-operative and at discharge from hospital. Knowledge Expectations of significant other-scale and Krantz Health Opinion Survey were used before surgery and Received Knowledge of significant other-scale and Access to Knowledge at discharge. Patients undergoing elective hip or knee arthroplasty in seventeen hospitals were asked to identify one family member. The sample size was decided by power calculation. A total of 615 participants answered the questionnaires at both measurements. Family members perceived to receive less knowledge than they expected to have, most unfulfilled knowledge expectations were in the financial, social and experiential dimensions of knowledge. Seventy-four per cent of participants had unfulfilled knowledge expectations. Increased access to information from healthcare providers decreased the difference between received and expected knowledge. Compared to family members in southern Europe, those in the Nordic countries had more unfulfilled knowledge expectations and less access to information from healthcare providers. The evidence from this study highlights the need to involve the family members in the educational approach. © 2015 Nordic College of Caring Science.

  11. Association of Mass Media Communication with Contraceptive Use in Sub-Saharan Africa: A Meta-Analysis of Demographic and Health Surveys.

    PubMed

    Babalola, Stella; Figueroa, Maria-Elena; Krenn, Susan

    2017-11-01

    Literature abounds with evidence on the effectiveness of individual mass media interventions on contraceptive use and other health behaviors. There have been, however, very few studies summarizing effect sizes of mass media health communication campaigns in sub-Saharan Africa. In this study, we used meta-analytic techniques to pool data from 47 demographic and health surveys conducted between 2005 and 2015 in 31 sub-Saharan African countries and estimate the prevalence of exposure to family planning-related mass media communication. We also estimated the average effect size of exposure to mass media communication after adjusting for endogeneity. We performed meta-regression to assess the moderating role of selected variables on effect size. On average, 44% of women in sub-Saharan Africa were exposed to family planning-related mass media interventions in the year preceding the survey. Overall, exposure was associated with an effect size equivalent to an odds ratio of 1.93. More recent surveys demonstrated smaller effect sizes than earlier ones, while the effects were larger in lower contraceptive prevalence settings than in higher prevalence ones. The findings have implications for designing communication programs, setting expectations about communication impact, and guiding decisions about sample size estimation for mass media evaluation studies.
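
    Pooling survey-level effect sizes of this kind is commonly done with a random-effects model; the sketch below uses the DerSimonian-Laird estimator on invented log odds ratios and standard errors, not the 47 DHS estimates analysed in the paper.

      # Sketch of a DerSimonian-Laird random-effects pooling of log odds ratios (illustrative values).
      import numpy as np

      log_or = np.array([0.55, 0.78, 0.40, 0.91, 0.62, 0.70])   # per-survey log odds ratios (placeholder)
      se = np.array([0.12, 0.15, 0.10, 0.20, 0.14, 0.11])       # their standard errors (placeholder)

      w_fixed = 1.0 / se**2
      theta_fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)
      Q = np.sum(w_fixed * (log_or - theta_fixed) ** 2)          # Cochran's Q
      df = len(log_or) - 1
      C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
      tau2 = max(0.0, (Q - df) / C)                              # between-survey variance

      w_rand = 1.0 / (se**2 + tau2)
      theta = np.sum(w_rand * log_or) / np.sum(w_rand)
      se_theta = np.sqrt(1.0 / np.sum(w_rand))
      print(f"pooled OR = {np.exp(theta):.2f} "
            f"(95% CI {np.exp(theta - 1.96 * se_theta):.2f}-{np.exp(theta + 1.96 * se_theta):.2f})")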

  12. Genetic consequences of polygyny and social structure in an Indian fruit bat, Cynopterus sphinx. II. Variance in male mating success and effective population size.

    PubMed

    Storz, J F; Bhat, H R; Kunz, T H

    2001-06-01

    Variance in reproductive success is a primary determinant of genetically effective population size (Ne), and thus has important implications for the role of genetic drift in the evolutionary dynamics of animal taxa characterized by polygynous mating systems. Here we report the results of a study designed to test the hypothesis that polygynous mating results in significantly reduced Ne in an age-structured population. This hypothesis was tested in a natural population of a harem-forming fruit bat, Cynopterus sphinx (Chiroptera: Pteropodidae), in western India. The influence of the mating system on the ratio of variance Ne to adult census number (N) was assessed using a mathematical model designed for age-structured populations that incorporated demographic and genetic data. Male mating success was assessed by means of direct and indirect paternity analysis using 10-locus microsatellite genotypes of adults and progeny from two consecutive breeding periods (n = 431 individually marked bats). Combined results from both analyses were used to infer the effective number of male parents in each breeding period. The relative proportion of successfully reproducing males and the size distribution of paternal sibships comprising each offspring cohort revealed an extremely high within-season variance in male mating success (up to 9.2 times higher than Poisson expectation). The resultant estimate of Ne/N for the C. sphinx study population was 0.42. As a result of polygynous mating, the predicted rate of drift (1/2Ne per generation) was 17.6% higher than expected from a Poisson distribution of male mating success. However, the estimated Ne/N was well within the 0.25-0.75 range expected for age-structured populations under normal demographic conditions. The life-history schedule of C. sphinx is characterized by a disproportionately short sexual maturation period scaled to adult life span. Consequently, the influence of polygynous mating on Ne/N is mitigated by the extensive overlap of generations. In C. sphinx, turnover of breeding males between seasons ensures a broader sampling of the adult male gamete pool than expected from the variance in mating success within a single breeding period.
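
    The link between variance in male mating success and effective size can be sketched with the standard discrete-generation formula Ne = (N*kbar - 1) / (kbar - 1 + Vk/kbar). The paper itself uses a model for age-structured populations, so this is only the simplest form of the idea, and the offspring counts below are placeholders.

      # Sketch of the discrete-generation relationship between reproductive-success variance and Ne.
      import numpy as np

      offspring = np.array([0, 0, 0, 0, 1, 1, 2, 3, 5, 9])   # offspring sired per male (placeholder)
      N = len(offspring)
      kbar = offspring.mean()
      Vk = offspring.var(ddof=1)

      Ne = (N * kbar - 1) / (kbar - 1 + Vk / kbar)
      Ne_poisson = (N * kbar - 1) / kbar                      # same formula with Vk = kbar
      print(f"kbar = {kbar:.1f}, Vk = {Vk:.1f} ({Vk / kbar:.1f}x the Poisson expectation)")
      print(f"Ne = {Ne:.1f}  vs  Ne under Poisson mating success = {Ne_poisson:.1f}")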

  13. Effect of separate sampling on classification accuracy.

    PubMed

    Shahrokh Esfahani, Mohammad; Dougherty, Edward R

    2014-01-15

    Measurements are commonly taken from two phenotypes to build a classifier, where the number of data points from each class is predetermined, not random. In this 'separate sampling' scenario, the data cannot be used to estimate the class prior probabilities. Moreover, predetermined class sizes can severely degrade classifier performance, even for large samples. We employ simulations using both synthetic and real data to show the detrimental effect of separate sampling on a variety of classification rules. We establish propositions related to the effect on the expected classifier error owing to a sampling ratio different from the population class ratio. From these we derive a sample-based minimax sampling ratio and provide an algorithm for approximating it from the data. We also extend to arbitrary distributions the classical population-based Anderson linear discriminant analysis minimax sampling ratio derived from the discriminant form of the Bayes classifier. All the codes for synthetic data and real data examples are written in MATLAB. A function called mmratio, whose output is an approximation of the minimax sampling ratio of a given dataset, is also written in MATLAB. All the codes are available at: http://gsp.tamu.edu/Publications/supplementary/shahrokh13b.

  14. Study design and sampling intensity for demographic analyses of bear populations

    USGS Publications Warehouse

    Harris, R.B.; Schwartz, C.C.; Mace, R.D.; Haroldson, M.A.

    2011-01-01

    The rate of population change through time (λ) is a fundamental element of a wildlife population's conservation status, yet estimating it with acceptable precision for bears is difficult. For studies that follow known (usually marked) bears, λ can be estimated during some defined time by applying either life-table or matrix projection methods to estimates of individual vital rates. Usually however, confidence intervals surrounding the estimate are broader than one would like. Using an estimator suggested by Doak et al. (2005), we explored the precision to be expected in λ from demographic analyses of typical grizzly (Ursus arctos) and American black (U. americanus) bear data sets. We also evaluated some trade-offs among vital rates in sampling strategies. Confidence intervals around λ were more sensitive to adding to the duration of a short (e.g., 3 yrs) than a long (e.g., 10 yrs) study, and more sensitive to adding additional bears to studies with small (e.g., 10 adult females/yr) than large (e.g., 30 adult females/yr) sample sizes. Confidence intervals of λ projected using process-only variance of vital rates were only slightly smaller than those projected using total variances of vital rates. Under sampling constraints typical of most bear studies, it may be more efficient to invest additional resources into monitoring recruitment and juvenile survival rates of females already a part of the study, than to simply increase the sample size of study females. © 2011 International Association for Bear Research and Management.
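
    The matrix-projection route to λ mentioned above reduces to taking the dominant eigenvalue of a stage-structured projection matrix; the survival and recruitment values in this sketch are placeholders, not estimates for grizzly or black bears.

      # Sketch of estimating lambda as the dominant eigenvalue of a projection matrix (illustrative rates).
      import numpy as np

      s_cub, s_subadult, s_adult = 0.65, 0.80, 0.93   # annual survival by stage (placeholder)
      recruitment = 0.35                               # female cubs per adult female per year (placeholder)

      # Stages: cub -> subadult -> adult; adults reproduce and survive in place.
      A = np.array([
          [0.0,    0.0,         recruitment],
          [s_cub,  0.0,         0.0        ],
          [0.0,    s_subadult,  s_adult    ],
      ])

      lam = np.linalg.eigvals(A).real.max()    # dominant eigenvalue = asymptotic rate of change
      print(f"lambda = {lam:.3f}  ({'growing' if lam > 1 else 'declining'} population)")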

  15. Family size and expectations about housing in the later nineteenth century: three Yorkshire towns.

    PubMed

    Atkinson, Paul

    2011-01-01

    This article illustrates how cultural history can deepen the understanding of demographic change, presenting evidence about ways in which rising working-class expectations about appropriate living standards may have created additional pressures on the perceived costs of child-rearing. Among the key areas of family consumption, housing costs are selected for examination. It is shown that higher expectations about appropriate housing quality put pressure on family budgets, augmented by the rising cost of like-for-like housing. The discussion considers expectations about the size of the dwelling and attitudes to furnishing the home, and suggests that these rising expectations helped encourage family limitation. Existing accounts of the fertility decline which stress the role of rising expectations are often too generalised: this article illustrates what can be gained by adding detail and geographical variation.

  16. Visual context processing deficits in schizophrenia: effects of deafness and disorganization.

    PubMed

    Horton, Heather K; Silverstein, Steven M

    2011-07-01

    Visual illusions allow for strong tests of perceptual functioning. Perceptual impairments can produce superior task performance on certain tasks (i.e., more veridical perception), thereby avoiding generalized deficit confounds while tapping mechanisms that are largely outside of conscious control. Using a task based on the Ebbinghaus illusion, a perceptual phenomenon where the perceived size of a central target object is affected by the size of surrounding inducers, we tested hypotheses related to visual integration in deaf (n = 31) and hearing (n = 34) patients with schizophrenia. In past studies, psychiatrically healthy samples displayed increased visual integration relative to schizophrenia samples and thus were less able to correctly judge target sizes. Deafness, and especially the use of sign language, leads to heightened sensitivity to peripheral visual cues and increased sensitivity to visual context. Therefore, relative to hearing subjects, deaf subjects were expected to display increased context sensitivity (ie, a more normal illusion effect as evidenced by a decreased ability to correctly judge central target sizes). Confirming the hypothesis, deaf signers were significantly more sensitive to the illusion than nonsigning hearing patients. Moreover, an earlier age of sign language acquisition, higher levels of linguistic ability, and shorter illness duration were significantly related to increased context sensitivity. As predicted, disorganization was associated with reduced context sensitivity for all subjects. The primary implications of these data are that perceptual organization impairment in schizophrenia is plastic and that it is related to a broader failure in coordinating cognitive activity.

  17. Quality gap in primary health care services in Isfahan: women's perspective

    PubMed Central

    Sharifirad, Gholam R.; Shamsi, Mohsen; Pirzadeh, Asiyeh; Farzanegan, Parvin D.

    2012-01-01

    Background: The quality gap is the gap between clients' understanding and their expectations. The first step in removing this gap is to recognize clients' understanding and expectations of the services. This study aimed to determine women's viewpoint of the quality gap in primary health care centers of Isfahan. Materials and Methods: This cross-sectional study was conducted on women who attended primary health care centers in Isfahan city. The sample size was 1280 people. The SERVQUAL (Service Quality) instrument was used to collect data on the tangible, confidence, responsiveness, assurance and sympathy dimensions of service provision. Data were analyzed by t test and chi-square test. Results: The results showed a quality gap in all 5 dimensions. The smallest mean quality gap was seen in assurance (-11.08) and the largest in the tangible dimension (-14.41). The difference in women's viewpoint was significant in all 5 dimensions (P < 0.05). Conclusion: A negative difference means clients' expectations are much higher than their understanding of the current situation, so there is considerable room to improve services and satisfy clients. PMID:23555148

  18. Classical and non-classical effective medium theories: New perspectives

    NASA Astrophysics Data System (ADS)

    Tsukerman, Igor

    2017-05-01

    Future research in electrodynamics of periodic electromagnetic composites (metamaterials) can be expected to produce sophisticated homogenization theories valid for any composition and size of the lattice cell. The paper outlines a promising path in that direction, leading to non-asymptotic and nonlocal homogenization models, and highlights aspects of homogenization that are often overlooked: the finite size of the sample and the role of interface boundaries. Classical theories (e.g. Clausius-Mossotti, Maxwell Garnett), while originally derived from a very different set of ideas, fit well into the proposed framework. Nonlocal effects can be included in the model, making order-of-magnitude accuracy improvements possible. One future challenge is to determine what effective parameters can or cannot be obtained for a given set of constituents of a metamaterial lattice cell, thereby delineating the possible from the impossible in metamaterial design.
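    For reference, the classical Maxwell Garnett mixing rule referred to above gives, for spherical inclusions of permittivity ε_i at volume fraction f in a host of permittivity ε_h, an effective permittivity satisfying the standard textbook relation (quoted here for context, not from this paper):

    $$\frac{\varepsilon_{\mathrm{eff}}-\varepsilon_h}{\varepsilon_{\mathrm{eff}}+2\varepsilon_h}
      \;=\; f\,\frac{\varepsilon_i-\varepsilon_h}{\varepsilon_i+2\varepsilon_h}.$$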

  19. Effect of Zn2+, Fe3+ and Cr3+ addition to hydroxyapatite for its application as an active constituent of sunscreens

    NASA Astrophysics Data System (ADS)

    de Araujo, T. S.; de Souza, S. O.; de Sousa, E. M. B.

    2010-11-01

    Biocompatible phosphate materials are used in different applications such as bone and dental implants and drug delivery systems, but could also be applied in inorganic sunscreens. The use of sunscreens is essential because long-term exposure to the sun can cause skin cancer. In this work, the chemical precipitation method was used to produce hydroxyapatite. Cr3+, Zn2+ and Fe3+ doped samples were characterized using powder X-Ray Diffraction (XRD) and Optical Absorption techniques. X-ray diffraction measurements confirmed the materials were in the expected crystalline structures. The crystallite size measured from the X-ray pattern was 23-27 nm (±1). The absorption spectra in the ultraviolet and visible ranges indicate that appropriately doped and sized hydroxyapatite particles may have potential applications as active constituents of sunscreens.
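    Crystallite sizes in this range are commonly extracted from X-ray line broadening via the Scherrer equation; the abstract does not state which relation was used, so the following is given only as the standard form:

    $$D=\frac{K\lambda}{\beta\cos\theta},$$

    where D is the crystallite size, K ≈ 0.9 the shape factor, λ the X-ray wavelength, β the peak full width at half maximum (in radians) and θ the Bragg angle.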

  20. Neural Network Emulation of Reionization Simulations

    NASA Astrophysics Data System (ADS)

    Schmit, Claude J.; Pritchard, Jonathan R.

    2018-05-01

    Next generation radio experiments such as LOFAR, HERA and SKA are expected to probe the Epoch of Reionization and claim a first direct detection of the cosmic 21cm signal within the next decade. One of the major challenges for these experiments will be dealing with enormous incoming data volumes. Machine learning is key to increasing our data analysis efficiency. We consider the use of an artificial neural network to emulate 21cmFAST simulations and use it in a Bayesian parameter inference study. We then compare the network predictions to a direct evaluation of the EoR simulations and analyse the dependence of the results on the training set size. We find that the use of a training set of size 100 samples can recover the error contours of a full scale MCMC analysis which evaluates the model at each step.
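    The emulation idea is straightforward to sketch: train a small regression network that maps simulation parameters to the simulated summary statistic, then call the network instead of the simulator inside the MCMC likelihood. The toy below stands in for 21cmFAST with a cheap analytic function and uses scikit-learn's MLPRegressor; the architecture, parameter ranges and "spectrum" are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)

    def fake_simulator(theta):
        """Stand-in for an expensive simulation: maps 3 parameters to a 10-bin 'power spectrum'."""
        k = np.linspace(0.1, 1.0, 10)
        return theta[0] * np.exp(-k / theta[1]) + theta[2] * k

    # Training set: a modest number of simulator evaluations (the paper reports ~100 can suffice).
    thetas = rng.uniform([0.5, 0.2, 0.0], [2.0, 1.0, 0.5], size=(100, 3))
    spectra = np.array([fake_simulator(t) for t in thetas])

    emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                            random_state=0).fit(thetas, spectra)

    # Inside an MCMC step one would now call emulator.predict instead of the simulator.
    test = np.array([[1.0, 0.5, 0.2]])
    print(np.round(emulator.predict(test) - fake_simulator(test[0]), 3))
    ```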

  1. Asymptotic Distributions of Coalescence Times and Ancestral Lineage Numbers for Populations with Temporally Varying Size

    PubMed Central

    Chen, Hua; Chen, Kun

    2013-01-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference. PMID:23666939

  2. Asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size.

    PubMed

    Chen, Hua; Chen, Kun

    2013-07-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n - An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference.

  3. The Conditioning Influence of Organization Size Upon Managerial Practice

    ERIC Educational Resources Information Center

    Mahoney, Thomas A.; And Others

    1972-01-01

    Two dimensions of organization size -- unit size and size of parent organization -- are analyzed for independent and joint relationships with various dimensions of organizational behavior and managerial practice. Results suggest that managerial practices of delegation, staffing, and direction vary with size and moderate expected size influences…

  4. The BetaCage: Ultrasensitive Screener for Radioactive Backgrounds

    NASA Astrophysics Data System (ADS)

    Thompson, Michael; BetaCage Collaboration

    2017-09-01

    Rare event searches, such as dark matter detection and neutrinoless double beta decay, require screening of materials for backgrounds such as beta emission and alpha decaying isotopes. The BetaCage is a proposed ultra-sensitive time-projection chamber to screen for alpha-emitting and low energy beta-emitting (10-200 keV) contaminants. The expected sensitivity is 0.1 beta particles per (keV·m²·day) and 0.1 alpha particles per (m²·day), where the former will be limited by Compton scattering of external photons in the screening samples and the latter is expected to be signal-limited. The prototype BetaCage under commissioning at South Dakota School of Mines & Technology is filled with P10 gas (10% methane, 90% argon) in place of neon and is 40×40×20 cm in size. Details on design, construction and characterization will be presented.

  5. Ingestion of marine debris by the White-chinned Petrel (Procellaria aequinoctialis): Is it increasing over time off southern Brazil?

    PubMed

    Petry, Maria V; Benemann, Victória R F

    2017-04-15

    Seabirds are amongst the most affected organisms by plastic pollution worldwide. Ingestion of marine debris has been reported in at least 122 species, and owing to the increasing global production and persistence of these anthropogenic materials within the marine environment, it is expected to be a growing problem to the marine fauna. Here we report evidence of an increasing frequency in marine debris ingestion and a decrease in the amount of plastic pellets ingested by White-chinned Petrels attending south Brazilian waters during the last three decades. Future studies comprising large temporal scales and large sample sizes are needed to better understand the trends of marine debris ingestion by seabirds. We expect our findings to highlight the need for prevention policies and mitigation measures to reduce the amount of solid litter in the oceans. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Robust DNA Isolation and High-throughput Sequencing Library Construction for Herbarium Specimens.

    PubMed

    Saeidi, Saman; McKain, Michael R; Kellogg, Elizabeth A

    2018-03-08

    Herbaria are an invaluable source of plant material that can be used in a variety of biological studies. The use of herbarium specimens is associated with a number of challenges including sample preservation quality, degraded DNA, and destructive sampling of rare specimens. In order to more effectively use herbarium material in large sequencing projects, a dependable and scalable method of DNA isolation and library preparation is needed. This paper demonstrates a robust, beginning-to-end protocol for DNA isolation and high-throughput library construction from herbarium specimens that does not require modification for individual samples. This protocol is tailored for low quality dried plant material and takes advantage of existing methods by optimizing tissue grinding, modifying library size selection, and introducing an optional reamplification step for low yield libraries. Reamplification of low yield DNA libraries can rescue samples derived from irreplaceable and potentially valuable herbarium specimens, negating the need for additional destructive sampling and without introducing discernible sequencing bias for common phylogenetic applications. The protocol has been tested on hundreds of grass species, but is expected to be adaptable for use in other plant lineages after verification. This protocol can be limited by extremely degraded DNA, where fragments do not exist in the desired size range, and by secondary metabolites present in some plant material that inhibit clean DNA isolation. Overall, this protocol introduces a fast and comprehensive method that allows for DNA isolation and library preparation of 24 samples in less than 13 h, with only 8 h of active hands-on time with minimal modifications.

  7. Infant formula samples: perinatal sources and breast-feeding outcomes at 1 month postpartum.

    PubMed

    Thurston, Amanda; Bolin, Jocelyn H; Chezem, Jo Carol

    2013-01-01

    The purpose was to describe sources of infant formula samples during the perinatal period and assess their associations with breast-feeding outcomes at 1 month postpartum. Subjects included expectant mothers who anticipated breast-feeding at least 1 month. Infant feeding history and sources of formula samples were obtained at 1 month postpartum. Associations between sources and breast-feeding outcomes were assessed using partial correlation. Of the 61 subjects who initiated breast-feeding, most were white (87%), married (75%), college-educated (75%), and planned exclusive breast-feeding (82%). Forty-two subjects (69%) continued breast-feeding at 1 month postpartum. Subjects received formula samples from the hospital (n = 40; 66%), physician's office (n = 10; 16%), and mail (n = 41; 67%). There were no significant correlations between formula samples from the hospital, physician's office, and/or mail and any or exclusive breast-feeding at 1 month (P > .05). In addition to the hospital, a long-standing source of formula samples, mail was also frequently reported as a route for distribution. The lack of statistically significant associations between formula samples and any or exclusive breast-feeding at 1 month may be related to small sample size and unique characteristics of the group studied.

  8. Distribution of pesticide residues in soil and uncertainty of sampling.

    PubMed

    Suszter, Gabriela K; Ambrus, Árpád

    2017-08-03

    Pesticide residues were determined in about 120 soil cores taken randomly from the top 15 cm layer of two sunflower fields about 30 days after preemergence herbicide treatments. Samples were extracted with an acetone-ethyl acetate mixture and the residues were determined with GC-TSD. Residues of dimethenamid, pendimethalin, and prometryn ranged from 0.005 to 2.97 mg/kg. Their relative standard deviations (CV) were between 0.66 and 1.13. The relative frequency distributions of residues in soil cores were very similar to those observed in root and tuber vegetables grown in pesticide treated soils. Based on all available information, a typical CV of 1.00 was estimated for pesticide residues in primary soil samples (soil cores). The corresponding expected relative sampling uncertainty is 20% when composite samples of size 25 are taken. To obtain a reliable estimate of the average residues in the top 15 cm soil layer of a field, up to 8 independent replicate random samples should be taken. The improvement in the estimate of the actual residue level of the sampled field would be marginal if a larger number of samples were taken.
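    The 20% figure follows from the usual relation between the relative standard deviation of single cores and that of a composite of n cores, assuming the increments are taken independently at random:

    $$\mathrm{CV}_{\text{composite}}=\frac{\mathrm{CV}_{\text{core}}}{\sqrt{n}}=\frac{1.00}{\sqrt{25}}=0.20\;(20\%).$$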

  9. The Bridge Between Real and Ideal: Students Perception on Quality Gap in Reality and Their Educational Expectations

    PubMed Central

    Nabilou, Bahram; Khorasani-Zavareh, Davoud

    2014-01-01

    Background: Studies in higher education indicate that students' expectations of their educational services are not met sufficiently, particularly in developing countries, implying a gap between students' perceptions of the current situation and their expectations of educational services. Objectives: The aim of this study was to determine the gap between the perceptions and expectations of students at various levels of the undergraduate educational courses at Urmia University of Medical Sciences, Iran. Patients and Methods: This longitudinal study was conducted in the 2007-2008 academic year at the Urmia University of Medical Sciences. In total, 173 students from various courses were selected as the sample. The SERVQUAL questionnaire was used as the instrument. Descriptive statistics followed by Friedman and Wilcoxon tests were used to determine the significance of the quality gap across the five dimensions and to evaluate the gap between student perceptions and their expectations, respectively. The Spearman test was also used to determine the relationship between dimensions. Results: Overall, 80% of educational expectations were not met; there was a negative gap for all phrases and dimensions, and the gap was more negative for educational experts (-1.45 ± 0.89) than for teachers (-0.97 ± 0.97). The largest gap for teachers was in the empathy dimension (-1.11 ± 1.51), while for experts it was in the assurance dimension (-1.58 ± 1). Conclusions: The existence of gaps in all dimensions indicates that students' expectations are not met, reflecting their dissatisfaction, and thus improvement is needed in all dimensions. PMID:25593712

  10. Land cover, more than monthly fire weather, drives fire-size distribution in Southern Québec forests: Implications for fire risk management

    PubMed Central

    Marchal, Jean; Cumming, Steve G.; McIntire, Eliot J. B.

    2017-01-01

    Fire activity in North American forests is expected to increase substantially with climate change. This would represent a growing risk to human settlements and industrial infrastructure proximal to forests, and to the forest products industry. We modelled fire size distributions in southern Québec as functions of fire weather and land cover, thus explicitly integrating some of the biotic interactions and feedbacks in a forest-wildfire system. We found that, contrary to expectations, land-cover and not fire weather was the primary driver of fire size in our study region. Fires were highly selective on fuel-type under a wide range of fire weather conditions: specifically, deciduous forest, lakes and to a lesser extent recently burned areas decreased the expected fire size in their vicinity compared to conifer forest. This has large implications for fire risk management in that fuels management could reduce fire risk over the long term. Our results imply, for example, that if 30% of a conifer-dominated landscape were converted to hardwoods, the probability of a given fire, occurring in that landscape under mean fire weather conditions, exceeding 100,000 ha would be reduced by a factor of 21. A similarly marked but slightly smaller effect size would be expected under extreme fire weather conditions. We attribute the decrease in expected fire size that occurs in recently burned areas to fuel availability limitations on fire spread. Because regenerating burned conifer stands often pass through a deciduous stage, this would also act as a negative biotic feedback whereby the occurrence of fires limits the size of nearby future fires for some period of time. Our parameter estimates imply that changes in vegetation flammability or fuel availability after fires would tend to counteract shifts in the fire size distribution favoring larger fires that are expected under climate warming. Ecological forecasts from models neglecting these feedbacks may markedly overestimate the consequences of climate warming on fire activity, and could be misleading. Assessments of vulnerability to climate change, and subsequent adaptation strategies, are directly dependent on integrated ecological forecasts. Thus, we stress the need to explicitly incorporate land-cover’s direct effects and feedbacks in simulation models of coupled climate–fire–fuels systems. PMID:28609467

  11. Land cover, more than monthly fire weather, drives fire-size distribution in Southern Québec forests: Implications for fire risk management.

    PubMed

    Marchal, Jean; Cumming, Steve G; McIntire, Eliot J B

    2017-01-01

    Fire activity in North American forests is expected to increase substantially with climate change. This would represent a growing risk to human settlements and industrial infrastructure proximal to forests, and to the forest products industry. We modelled fire size distributions in southern Québec as functions of fire weather and land cover, thus explicitly integrating some of the biotic interactions and feedbacks in a forest-wildfire system. We found that, contrary to expectations, land-cover and not fire weather was the primary driver of fire size in our study region. Fires were highly selective on fuel-type under a wide range of fire weather conditions: specifically, deciduous forest, lakes and to a lesser extent recently burned areas decreased the expected fire size in their vicinity compared to conifer forest. This has large implications for fire risk management in that fuels management could reduce fire risk over the long term. Our results imply, for example, that if 30% of a conifer-dominated landscape were converted to hardwoods, the probability of a given fire, occurring in that landscape under mean fire weather conditions, exceeding 100,000 ha would be reduced by a factor of 21. A similarly marked but slightly smaller effect size would be expected under extreme fire weather conditions. We attribute the decrease in expected fire size that occurs in recently burned areas to fuel availability limitations on fire spread. Because regenerating burned conifer stands often pass through a deciduous stage, this would also act as a negative biotic feedback whereby the occurrence of fires limits the size of nearby future fires for some period of time. Our parameter estimates imply that changes in vegetation flammability or fuel availability after fires would tend to counteract shifts in the fire size distribution favoring larger fires that are expected under climate warming. Ecological forecasts from models neglecting these feedbacks may markedly overestimate the consequences of climate warming on fire activity, and could be misleading. Assessments of vulnerability to climate change, and subsequent adaptation strategies, are directly dependent on integrated ecological forecasts. Thus, we stress the need to explicitly incorporate land-cover's direct effects and feedbacks in simulation models of coupled climate-fire-fuels systems.

  12. Non-parametric estimation of population size changes from the site frequency spectrum.

    PubMed

    Waltoft, Berit Lindum; Hobolth, Asger

    2018-06-11

    Changes in population size are useful for understanding the evolutionary history of a species. Genetic variation within a species can be summarized by the site frequency spectrum (SFS). For a sample of size n, the SFS is a vector of length n - 1 where entry i is the number of sites where the mutant base appears i times and the ancestral base appears n - i times. We present a new method, CubSFS, for estimating the changes in population size of a panmictic population from an observed SFS. First, we provide a straightforward proof for the expression of the expected site frequency spectrum depending only on the population size. Our derivation is based on an eigenvalue decomposition of the instantaneous coalescent rate matrix. Second, we solve the inverse problem of determining the changes in population size from an observed SFS. Our solution is based on a cubic spline for the population size. The cubic spline is determined by minimizing the weighted average of two terms, namely (i) the goodness of fit to the observed SFS, and (ii) a penalty term based on the smoothness of the changes. The weight is determined by cross-validation. The new method is validated on simulated demographic histories and applied to unfolded and folded SFS from 26 different human populations from the 1000 Genomes Project.
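    As a point of reference for the inverse problem described above, the expected unfolded SFS for a panmictic population of constant size has the well-known closed form E[ξ_i] = θ/i; departures of the observed SFS from this 1/i shape are what carry the signal about population-size changes. The short sketch below only illustrates that baseline (the observed counts are made up), whereas CubSFS itself fits a cubic spline for N(t).

    ```python
    import numpy as np

    n = 20        # sample size (number of sequences)
    theta = 5.0   # population-scaled mutation rate (hypothetical)

    # Expected unfolded SFS under constant population size: E[xi_i] = theta / i.
    i = np.arange(1, n)
    expected_sfs = theta / i

    # A hypothetical observed SFS; an excess of singletons relative to theta/i
    # is the classic signature of recent population growth.
    observed_sfs = np.array([9.0, 2.6, 1.8, 1.1, 1.0, 0.8, 0.7, 0.6, 0.6, 0.5,
                             0.5, 0.4, 0.4, 0.4, 0.3, 0.3, 0.3, 0.3, 0.3])

    print(np.round(expected_sfs, 2))
    print("singleton excess:", observed_sfs[0] - expected_sfs[0])
    ```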

  13. Observational safety study of specific outcomes after trivalent cell culture seasonal influenza vaccination (Optaflu® ) among adults in THIN database of electronic UK primary healthcare records.

    PubMed

    Hall, Gillian C; Davies, Paul T G; Karim, M Yousuf; Haag, Mendel D M; O'Leary, Caroline

    2018-01-01

    To investigate the safety of trivalent seasonal influenza vaccine (TIVc) (Optaflu ® ), the first cell culture seasonal trivalent influenza vaccine available in Europe. Codes and unstructured text in adult electronic healthcare records (The Health Improvement Network) were searched for a TIVc brand name or batch number and possible outcomes within a 3 month pre- to 6 month post-TIVc exposure study period (2012-2015). The outcomes were severe allergic reactions, Bell's palsy, convulsions, demyelination, paresthesia, noninfectious encephalitis, neuritis (optic and brachial), vasculitis, inflammatory bowel disease, and thrombocytopenia. Risk periods were defined based on biologically plausible time frame postvaccination when an outcome caused by the vaccine might be expected to occur. Possible outcomes were adjudicated against outcome specific case definitions and a date of onset assigned by using electronic and other medical records. Observed (risk period) to expected (outside risk and preexposure periods) rate ratios, postexposure incidence, and plots of time from exposure to outcome were reported. Sixteen of 1011 events from 4578 exposures fulfilled a primary case definition and had a date of onset during the study period. Three were in observed time. The observed-to-expected rate ratios were (3.3, 95% CI 0.3, 31.7) for convulsions and (1.5, 95% CI 0.2, 14.9) for thrombocytopenia with 1 outcome each in observed time. There was 1 incident inflammatory bowel disease in observed, but none in expected, time. The small sample size restricts interpretation; however, no hypothesis of an increased risk of a study outcome was generated. Adjudication of events against case definitions to reduce misclassification of onset and outcomes allowed use of precise risk periods. KEY POINTS This observational study did not generate a hypothesis of an association between the first cell-culture seasonal influenza vaccination available in the European Union and any of the study outcomes (severe allergic reactions, Bell's palsy, convulsions, demyelination, paresthesia, noninfectious encephalitis, neuritis [optic and brachial], vasculitis, inflammatory bowel disease [IBD], and thrombocytopenia). The small sample size limits interpretation of the results. The review of each possible outcome identified from electronic healthcare records against case definitions was included to minimize misclassification of time and outcomes and allow the use of precise risk-periods in an observed-to-expected within cohort analysis. Plots of time from exposure to outcome were included to assess the risk windows. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Genetic analysis of Mexican Criollo cattle populations.

    PubMed

    Ulloa-Arvizu, R; Gayosso-Vázquez, A; Ramos-Kuri, M; Estrada, F J; Montaño, M; Alonso, R A

    2008-10-01

    The objective of this study was to evaluate the genetic structure of Mexican Criollo cattle populations using microsatellite genetic markers. DNA samples were collected from 168 animals from four Mexican Criollo cattle populations, geographically isolated in remote areas of the Sierra Madre Occidental (West Highlands). Samples were also included from two breeds of Iberian origin, the fighting bull (n = 24) and the milking Central American Criollo (n = 24), and one Asiatic breed, Guzerat (n = 32). Genetic analysis consisted of estimating the genetic diversity in each population from the allele number and the average expected heterozygosity at nine microsatellite loci. Furthermore, genetic relationships among the populations were defined by their genetic distances. Our data show that Mexican cattle populations have a relatively high level of genetic diversity based both on the mean number of alleles (10.2-13.6) and on the expected heterozygosity (0.71-0.85). The degree of observed homozygosity within the Criollo populations was remarkable and probably caused by inbreeding (reduced effective population size), possibly due to reproductive structure within populations. Our data show that considerable genetic differentiation has occurred among the Criollo cattle populations in different regions of Mexico.
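    The diversity statistic quoted above can be reproduced directly from allele frequencies: expected heterozygosity at a locus is H_e = 1 − Σ p_i², averaged over loci. A minimal sketch with made-up microsatellite allele counts (not values from the study):

    ```python
    import numpy as np

    def expected_heterozygosity(allele_counts):
        """Expected heterozygosity H_e = 1 - sum(p_i^2) from allele counts at one locus."""
        counts = np.asarray(allele_counts, dtype=float)
        p = counts / counts.sum()
        return 1.0 - float(np.sum(p ** 2))

    # Hypothetical allele counts at three microsatellite loci.
    loci = [[30, 25, 20, 15, 10, 8, 6, 4],
            [40, 30, 20, 10, 8],
            [22, 20, 18, 16, 14, 12, 10, 8, 6, 4]]

    he = [expected_heterozygosity(c) for c in loci]
    print("per-locus H_e:", np.round(he, 3), " mean H_e:", round(float(np.mean(he)), 3))
    ```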

  15. Equation of State and Electrical Conductivity of Helium at High Pressures and Temperatures

    NASA Astrophysics Data System (ADS)

    McWilliams, R. S.; Eggert, J. H.; Loubeyre, P.; Brygoo, S.; Collins, G.; Jeanloz, R.

    2004-12-01

    Helium, the second-most abundant element in the universe and giant planets, is expected to metallize at much higher pressures and temperatures than the most abundant element, hydrogen. The difference in chemical-bonding character, between insulator and metal, is expected to make hydrogen-helium mixtures immiscible throughout large fractions of planetary interiors, and therefore subject to gravitational separation contributing significantly to the internal dynamics of giant planets. Using laser-driven shock waves on samples pre-compressed in high-pressure cells, we have obtained the first measurements of optical reflectivity from the shock front in helium to pressures of 146 GPa. The reflectivity exceeds 5% above ~100 GPa, indicating high electrical conductivity. By varying the initial pressure (hence density) of the sample, we can access a much wider range of final pressure-temperature conditions than is possible in conventional Hugoniot experiments. Our work increases by nine-fold the pressure range of single-shock measurements, in comparison with gas-gun experiments, and yields results in agreement with the Saumon, Chabrier and Van Horn (1994) equation of state for helium. This changes the internal structures inferred for Jupiter-size planets, relative to models based on earlier equations of state (e.g., SESAME).

  16. Uncertainties in detecting decadal change in extractable soil elements in Northern Forests

    NASA Astrophysics Data System (ADS)

    Bartlett, O.; Bailey, S. W.; Ducey, M. J.

    2016-12-01

    Northern Forest ecosystems have been or are being impacted by land use change, forest harvesting, acid deposition, atmospheric CO2 enrichment, and climate change. Each of these has the potential to modify soil forming processes, and the resulting chemical stocks. Horizontal and vertical variations in concentrations complicate determination of temporal change. This study evaluates sample design, sample size, and differences among observers as sources of uncertainty when quantifying soil temporal change over regional scales. Forty permanent northern hardwood monitoring plots were established on the White Mountain National Forest in central New Hampshire and western Maine. Soil pits were characterized and sampled by genetic horizon at plot center in 2001 and resampled in 2014 two meters on contour from the original sampling location. Each soil horizon was characterized by depth, color, texture, structure, consistency, boundaries, coarse fragments, and roots from the forest floor to the upper C horizon, the relatively unaltered glacial till parent material. Laboratory analyses included pH in 0.01 M CaCl2 solution and extractable Ca, Mg, Na, K, Al, Mn, and P in 1 M NH4OAc solution buffered at pH 4.8. Significant elemental differences identified by genetic horizon from paired t-tests (p ≤ 0.05) indicate temporal change across the study region. Power analysis at 0.9 power (α = 0.05) revealed that the sample size was appropriate within this region to detect concentration change by genetic horizon using a stratified sample design based on topographic metrics. There were no significant differences between observers' descriptions of physical properties. As physical properties would not be expected to change over a decade, this suggests spatial variation in physical properties between the pairs of sampling pits did not detract from our ability to detect temporal change. These results suggest that resampling within a site, repeated across a region, to quantify elemental change by carefully described genetic horizons is an appropriate method of detecting soil temporal change in this region. Sample size and design considerations from this project will have direct implications for future monitoring programs to characterize change in soil chemistry.
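    The kind of power calculation mentioned above (power 0.9, α = 0.05, paired resampling of the same plots) can be sketched with statsmodels as a one-sample t-test on the within-pair differences; the effect size below is a hypothetical standardized mean change, not a value from the study.

    ```python
    from statsmodels.stats.power import TTestPower

    # Paired design: power analysis on the within-pair differences.
    analysis = TTestPower()
    n_required = analysis.solve_power(effect_size=0.6,   # hypothetical standardized change
                                      alpha=0.05,
                                      power=0.9,
                                      alternative='two-sided')
    print(f"plots required to detect the assumed change: {n_required:.1f}")
    ```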

  17. Spatial patterns of self-recruitment of a coral reef fish in relation to island-scale retention mechanisms.

    PubMed

    Beldade, Ricardo; Holbrook, Sally J; Schmitt, Russell J; Planes, Serge; Bernardi, Giacomo

    2016-10-01

    Oceanographic features influence the transport and delivery of marine larvae, and physical retention mechanisms, such as eddies, can enhance self-recruitment (i.e. the return of larvae to their natal population). Knowledge of exact locations of hatching (origin) and settlement (arrival) of larvae of reef animals provides a means to compare observed patterns of self-recruitment 'connectivity' with those expected from water circulation patterns. Using parentage inference based on multiple sampling years in Moorea, French Polynesia, we describe spatial and temporal variation in self-recruitment of the anemonefish Amphiprion chrysopterus, evaluate the consistency of net dispersal distances of self-recruits against the null expectation of passive particle dispersal and test the hypothesis that larvae originating in certain reef habitats (lagoons and passes) would be retained and thus more likely to self-recruit than those originating on the outer (fore) reef. Estimates of known self-recruitment were consistent across the sampling years (~25-27% of sampled recruits). For most (88%) of these self-recruits, the net distance between hatching and settlement locations was within the maximum dispersal distance expected for a neutrally buoyant passive particle based on the longest duration of the larval dispersive phase and the average direction and speed of current flow around Moorea. Furthermore, a parent of a given body size on the outer (fore) reef of Moorea was less likely to produce self-recruits than those in passes. Our findings show that even a simple dispersal model based on net average flow and direction of alongshore currents can provide insight into landscape-scale retention patterns of reef fishes. © 2016 John Wiley & Sons Ltd.

  18. Do Demographic Characteristics Make Differences? Demographic Characteristics as Moderators in the Associations between Only Child Status and Cognitive/Non-cognitive Outcomes in China

    PubMed Central

    Liu, Ning; Chen, Yiting; Yang, Xiangdong; Hu, Yi

    2017-01-01

    Different family compositions and sizes may affect child development through the different modes of interaction between family members. Previous studies have compared only children with non-only children in cognitive/non-cognitive outcomes. However, relatively little research has systematically investigated the potential moderators among them. Using a large and representative sample of Chinese students (Grades 7–8; N = 5,752), this study examines the roles of demographic characteristics, such as gender, region, parental educational level, parental expectations, family socio-economic status and family structure, in the associations between only child status and cognitive/non-cognitive outcomes. For the cognitive outcomes, only child status exerts an influence on the students' academic performance in Chinese and mathematics in the sample of three districts' students. The examined associations between only child status and cognitive outcomes differ by region, parental education, parental expectations and family structure, but not by gender or family socio-economic status. For the non-cognitive outcomes, only child status exerts an influence on the students' school well-being, academic self-efficacy, academic self-concept, and internal academic motivation in the full sample of students, but not on external academic motivation. Further, the examined associations between only child status and non-cognitive outcomes differ by region, parental education, family socio-economic status and family structure, but not by gender or parental expectations. These findings suggest that the associations between only child status and cognitive/non-cognitive outcomes are heterogeneous in terms of some of the demographic characteristics. Possible explanations are proposed in terms of region and family environment in China. PMID:28421006

  19. Do Demographic Characteristics Make Differences? Demographic Characteristics as Moderators in the Associations between Only Child Status and Cognitive/Non-cognitive Outcomes in China.

    PubMed

    Liu, Ning; Chen, Yiting; Yang, Xiangdong; Hu, Yi

    2017-01-01

    Different family compositions and sizes may affect child development through the different modes of interaction between family members. Previous studies have compared only children with non-only children in cognitive/non-cognitive outcomes. However, relatively little research has systematically investigated the potential moderators among them. Using a large and representative sample of Chinese students (Grades 7-8; N = 5,752), this study examines the roles of demographic characteristics, such as gender, region, parental educational level, parental expectations, family socio-economic status and family structure, in the associations between only child status and cognitive/non-cognitive outcomes. For the cognitive outcomes, only child status exerts an influence on the students' academic performance in Chinese and mathematics in the sample of three districts' students. The examined associations between only child status and cognitive outcomes differ by region, parental education, parental expectations and family structure, but not by gender or family socio-economic status. For the non-cognitive outcomes, only child status exerts an influence on the students' school well-being, academic self-efficacy, academic self-concept, and internal academic motivation in the full sample of students, but not on external academic motivation. Further, the examined associations between only child status and non-cognitive outcomes differ by region, parental education, family socio-economic status and family structure, but not by gender or parental expectations. These findings suggest that the associations between only child status and cognitive/non-cognitive outcomes are heterogeneous in terms of some of the demographic characteristics. Possible explanations are proposed in terms of region and family environment in China.

  20. Thermoelectric Properties Studies on n-type Bi2Te3-xSex

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Yan, Xiao; Ma, Yi; Poudel, Bed; Lan, Yucheng; Wang, D. Z.; Ren, Z. F.; Hao, Q.; Chen, G.

    2008-03-01

    Bi2Te3-xSex is a classic room-temperature n-type thermoelectric material. In spite of the long history of research, its ZT is still below 1. By directly making nano-sized particles by mechanical alloying from the elements, then pressing the nanoparticles into a 100% dense bulk sample with nanostructures by hot pressing, we expect to decrease the thermal conductivity through increased grain-boundary scattering of phonons and so improve the ZT above 1. The Te/Se ratio was varied systematically to investigate its effect on thermal conductivity.
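    For context, the figure of merit referred to above is the standard dimensionless ZT, so lowering the lattice thermal conductivity κ through grain-boundary phonon scattering raises ZT at a fixed power factor:

    $$ZT=\frac{S^{2}\sigma T}{\kappa},$$

    where S is the Seebeck coefficient, σ the electrical conductivity, T the absolute temperature and κ the total thermal conductivity.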

  1. Numerical distance effect size is a poor metric of approximate number system acuity.

    PubMed

    Chesney, Dana

    2018-04-12

    Individual differences in the ability to compare and evaluate nonsymbolic numerical magnitudes-approximate number system (ANS) acuity-are emerging as an important predictor in many research areas. Unfortunately, recent empirical studies have called into question whether a historically common ANS-acuity metric-the size of the numerical distance effect (NDE size)-is an effective measure of ANS acuity. NDE size has been shown to frequently yield divergent results from other ANS-acuity metrics. Given these concerns and the measure's past popularity, it behooves us to question whether the use of NDE size as an ANS-acuity metric is theoretically supported. This study seeks to address this gap in the literature by using modeling to test the basic assumption underpinning use of NDE size as an ANS-acuity metric: that larger NDE size indicates poorer ANS acuity. This assumption did not hold up under test. Results demonstrate that the theoretically ideal relationship between NDE size and ANS acuity is not linear, but rather resembles an inverted J-shaped distribution, with the inflection points varying based on precise NDE task methodology. Thus, depending on specific methodology and the distribution of ANS acuity in the tested population, positive, negative, or null correlations between NDE size and ANS acuity could be predicted. Moreover, peak NDE sizes would be found for near-average ANS acuities on common NDE tasks. This indicates that NDE size has limited and inconsistent utility as an ANS-acuity metric. Past results should be interpreted on a case-by-case basis, considering both specifics of the NDE task and expected ANS acuity of the sampled population.

  2. Addressing adoption and research design decisions simultaneously: the role of value of sample information analysis.

    PubMed

    McKenna, Claire; Claxton, Karl

    2011-01-01

    Methods to estimate the cost-effectiveness of technologies are well developed with increasing experience of their application to inform adoption decisions in a timely way. However, the experience of using similarly explicit methods to inform the associated research decisions is less well developed despite appropriate methods being available with an increasing number of applications in health. The authors demonstrate that evaluation of both adoption and research decisions is feasible within typical time and resource constraints relevant to policy decisions, even in situations in which data are sparse and formal elicitation is required. In addition to demonstrating the application of expected value of sample information (EVSI) in these circumstances, the authors examine and carefully distinguish the impact that the research decision is expected to have on patients while enrolled in the trial, those not enrolled, and once the trial reports. In doing so, the authors are able to account for the range of opportunity cost associated with research and evaluate a number of design alternatives, including length of follow-up and sample size. The authors also explore the implications for research design of conducting research while the technology is approved for widespread use and whether approval should be withheld until research reports. In doing so, the authors highlight the impact of irrecoverable opportunity costs when the initial costs of a technology are compensated only by later gains in health outcome.

  3. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step size h_L at level L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
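    The telescoping identity behind MLMC can be sketched compactly: write E[P_L] = E[P_0] + Σ_{l=1..L} E[P_l − P_{l−1}] and estimate each correction term with its own coupled Monte Carlo sample, using fewer samples at the expensive fine levels. The toy below uses a cheap quadrature problem as the level-dependent quantity of interest; it is illustrative only and does not reproduce the paper's SMC machinery for the case where i.i.d. sampling at each level is impossible.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def P(x, level):
        """Level-l approximation of the quantity of interest: a midpoint-rule
        estimate of the integral of exp(x*t) over [0, 1] with 2**level subintervals."""
        m = 2 ** level
        t = (np.arange(m) + 0.5) / m
        return np.exp(np.outer(x, t)).mean(axis=1)

    L = 5
    n_per_level = [20000 // 4 ** l + 100 for l in range(L + 1)]  # fewer samples at finer (costlier) levels

    estimate = 0.0
    for l, n in enumerate(n_per_level):
        x = rng.normal(size=n)                      # X ~ N(0, 1)
        if l == 0:
            estimate += P(x, 0).mean()              # coarse base term E[P_0]
        else:
            # Coupled correction E[P_l - P_{l-1}]: the same draws feed both levels.
            estimate += (P(x, l) - P(x, l - 1)).mean()

    # Sanity check against plain Monte Carlo run entirely at the finest level.
    x_check = rng.normal(size=200_000)
    print(round(float(estimate), 4), round(float(P(x_check, L).mean()), 4))
    ```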

  4. Nine demographic factors and their relationship to attitudes toward abortion legalization.

    PubMed

    Mileti, D S; Barnett, L D

    1972-03-01

    The 1967 Gallup Poll on attitudes toward abortion legislation taken for the Population Council was studied by multivariate analysis of 9 demographic factors: age, family income, occupation of household head, race, section of the country, sex, city size, education and religion. The poll was taken in two waves that totaled 6,065 cases (after weighting for a representative sample and elimination of "don't know" and "no answer" responses). The question asked for approval or disapproval of the legalization of abortion for the four "hard" reasons: mother's health, rape, incest, or expected child deformity. The analysis revealed that age, family income, occupation of household head, race, section of the country, and sex did not in themselves have an effect on attitudes towards abortion legalization, though they sometimes were an influence in combination with other variables. Significant statistical correlations were found between approval of abortion legalization and increasing city size and higher educational level. Abortion approval also increases along a religious scale from Jewish-Protestant-Catholic. The most significant theoretical conclusion of the study was that 6 of the 9 factors were not influential on abortion attitudes and the remaining 3 did not have strong predictive-explanatory power as expected. Re-examination of the causes of abortion attitudes is needed.

  5. Epidemiology of Salmonella sp. in California cull dairy cattle: prevalence of fecal shedding and diagnostic accuracy of pooled enriched broth culture of fecal samples.

    PubMed

    Abu Aboud, Omran A; Adaska, John M; Williams, Deniece R; Rossitto, Paul V; Champagne, John D; Lehenbauer, Terry W; Atwill, Robert; Li, Xunde; Aly, Sharif S

    2016-01-01

    The primary objective of this cross-sectional study was to estimate the crude, seasonal and cull-reason stratified prevalence of Salmonella fecal shedding in cull dairy cattle on seven California dairies. A secondary objective was to estimate and compare the relative sensitivity (Se) and specificity (Sp) for pools of 5 and 10 enriched broth cultures of fecal samples for Salmonella sp. detection. Seven dairy farms located in the San Joaquin Valley of California were identified and enrolled in the study as a convenience sample. Cull cows were identified for fecal sampling once during each season between 2014 and 2015, specifically during spring, summer, fall, and winter, and 10 cows were randomly selected for fecal sampling at the day of their sale. In addition, study personnel completed a survey based on responses of the herd manager to questions related to the previous four month's herd management. Fecal samples were frozen until testing for Salmonella. After overnight enrichment in liquid broth, pools of enrichment broth (EBP) were created for 5 and 10 samples. All individual and pooled broths were cultured on selective media with putative Salmonella colonies confirmed by biochemical testing before being serogrouped and serotyped. A total of 249 cull cows were enrolled into the study and their fecal samples tested for Salmonella. The survey-weighted period prevalence of fecal shedding of all Salmonella sp. in the cull cow samples across all study herds and the entire study period was 3.42% (N = 249; SE 1.07). The within herd prevalence of Salmonella shed in feces did not differ over the four study seasons (P = 0.074). The Se of culture of EBP of five samples was 62.5% (SE = 17.12), which was not statistically different from the Se of culture of EBP of 10 (37.5%, SE = 17.12, P = 0.48). The Sp of culture of EBP of five samples was 95.24% (SE = 3.29) and for pools of 10 samples was 100.00% (SE = 0). There was no statistical difference between the culture relative specificities of EBP of 5 and 10 (P > 0.99). Our study showed a numerically higher prevalence of Salmonella shedding in the summer, although the results were not significant, most likely due to a lack of power from the small sample size. A higher prevalence in summer months may be related to heat stress. To detect Salmonella, investigators may expect a 62.5% sensitivity for culture of EBP of five, relative to individual fecal sample enrichment and culture. In contrast, culture of EBP of 10 samples resulted in a numerically lower Se. Culture of EBP of size 5 or 10 samples, given similar prevalence and limit of detection, can be expected to yield specificities of 95 and 100%, respectively.

  6. Epidemiology of Salmonella sp. in California cull dairy cattle: prevalence of fecal shedding and diagnostic accuracy of pooled enriched broth culture of fecal samples

    PubMed Central

    Abu Aboud, Omran A.; Adaska, John M.; Williams, Deniece R.; Rossitto, Paul V.; Champagne, John D.; Lehenbauer, Terry W.; Atwill, Robert; Li, Xunde

    2016-01-01

    Background The primary objective of this cross-sectional study was to estimate the crude, seasonal and cull-reason stratified prevalence of Salmonella fecal shedding in cull dairy cattle on seven California dairies. A secondary objective was to estimate and compare the relative sensitivity (Se) and specificity (Sp) for pools of 5 and 10 enriched broth cultures of fecal samples for Salmonella sp. detection. Methods Seven dairy farms located in the San Joaquin Valley of California were identified and enrolled in the study as a convenience sample. Cull cows were identified for fecal sampling once during each season between 2014 and 2015, specifically during spring, summer, fall, and winter, and 10 cows were randomly selected for fecal sampling at the day of their sale. In addition, study personnel completed a survey based on responses of the herd manager to questions related to the previous four month’s herd management. Fecal samples were frozen until testing for Salmonella. After overnight enrichment in liquid broth, pools of enrichment broth (EBP) were created for 5 and 10 samples. All individual and pooled broths were cultured on selective media with putative Salmonella colonies confirmed by biochemical testing before being serogrouped and serotyped. Results A total of 249 cull cows were enrolled into the study and their fecal samples tested for Salmonella. The survey-weighted period prevalence of fecal shedding of all Salmonella sp. in the cull cow samples across all study herds and the entire study period was 3.42% (N = 249; SE 1.07). The within herd prevalence of Salmonella shed in feces did not differ over the four study seasons (P = 0.074). The Se of culture of EBP of five samples was 62.5% (SE = 17.12), which was not statistically different from the Se of culture of EBP of 10 (37.5%, SE = 17.12, P = 0.48). The Sp of culture of EBP of five samples was 95.24% (SE = 3.29) and for pools of 10 samples was 100.00% (SE = 0). There was no statistical difference between the culture relative specificities of EBP of 5 and 10 (P > 0.99). Discussion Our study showed a numerically higher prevalence of Salmonella shedding in the summer, although the results were not significant, most likely due to a lack of power from the small sample size. A higher prevalence in summer months may be related to heat stress. To detect Salmonella, investigators may expect a 62.5% sensitivity for culture of EBP of five, relative to individual fecal sample enrichment and culture. In contrast, culture of EBP of 10 samples resulted in a numerically lower Se. Culture of EBP of size 5 or 10 samples, given similar prevalence and limit of detection, can be expected to yield specificities of 95 and 100%, respectively. PMID:27635350

  7. A Kolmogorov-Smirnov test for the molecular clock based on Bayesian ensembles of phylogenies

    PubMed Central

    Antoneli, Fernando; Passos, Fernando M.; Lopes, Luciano R.

    2018-01-01

    Divergence date estimates are central to understand evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and well known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit test. In the strict clock case, the method consists in using the one-sample Kolmogorov-Smirnov (KS) test to directly test if the phylogeny is clock-like, in other words, if it follows a Poisson law. The ECD is computed from the discretized branch lengths and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for the auto-correlation in the ensemble of trees and pseudo-replication we take advantage of thinning and effective sample size, two features provided by Bayesian inference MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures and in this case we propose the use of the two-sample KS test with samples from two continuous branch length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. Moreover, in this second form the test can also be applied to test for relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch lengths ECD, instead of one consensus tree, yields considerable reduction of the effects of small sample size and provides a gain of power. PMID:29300759
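    The two forms of the test described above translate almost directly into scipy: a one-sample KS statistic of the discretized branch lengths against a Poisson law with λ set to the mean branch length, and a two-sample KS test between branch lengths pooled from clock-constrained and unconstrained tree ensembles. The branch-length samples below are simulated placeholders, not output from the authors' pipeline.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Pretend these are branch lengths pooled from an ensemble of Bayesian trees,
    # discretized to substitution counts (a strict clock should look Poisson-like).
    clocklike = rng.poisson(lam=4.0, size=2000)

    # One-sample form: KS distance between the empirical CDF and a Poisson CDF
    # with lambda estimated as the average branch length.
    lam_hat = clocklike.mean()
    d1, p1 = stats.kstest(clocklike, stats.poisson(lam_hat).cdf)

    # Two-sample form: branch lengths from clock-constrained vs. unconstrained ensembles.
    unconstrained = rng.gamma(shape=2.0, scale=2.0, size=2000)
    d2, p2 = stats.ks_2samp(clocklike, unconstrained)

    print(f"one-sample KS D = {d1:.3f}, two-sample KS D = {d2:.3f}")
    ```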

  8. [Analysis of the patient safety culture in hospitals of the Spanish National Health System].

    PubMed

    Saturno, P J; Da Silva Gama, Z A; de Oliveira-Sousa, S L; Fonseca, Y A; de Souza-Oliveira, A C; Castillo, Carmen; López, M José; Ramón, Teresa; Carrillo, Andrés; Iranzo, M Dolores; Soria, Victor; Saturno, Pedro J; Parra, Pedro; Gomis, Rafael; Gascón, Juan José; Martinez, José; Arellano, Carmen; Gama, Zenewton A Da Silva; de Oliveira-Sousa, Silvana L; de Souza-Oliveira, Adriana C; Fonseca, Yadira A; Ferreira, Marta Sobral

    2008-12-01

    A safety culture is essential to minimize errors and adverse events. Its measurement is needed to design activities in order to improve it. This paper describes the methods and main results of a study on safety climate in a nation-wide representative sample of public hospitals of the Spanish NHS. The Hospital Survey on Patient Safety Culture questionnaire was distributed to a random sample of health professionals in a representative sample of 24 hospitals, proportionally stratified by hospital size. Results are analyzed to provide a description of safety climate, its strengths and weaknesses. Differences by hospital size, type of health professional and service are analyzed using ANOVA. A total of 2503 responses are analyzed (response rate: 40%; 93% from professionals with direct patient contact). A total of 50% gave patient safety a score from 6 to 8 (on a 10-point scale); 95% reported < 2 events last year. Dimensions "Teamwork within hospital units" (71.8 [1.8]) and "Supervisor/Manager expectations and actions promoting safety" (61.8 [1.7]) have the highest percentage of positive answers. "Staffing", "Teamwork across hospital units", "Overall perceptions of safety" and "Hospital management support for patient safety" could be identified as weaknesses. Significant differences by hospital size, type of professional and service suggest a generally more positive attitude in small hospitals and Pharmacy services, and a more negative one in physicians. Strengths and weaknesses of the safety climate in the hospitals of the Spanish NHS have been identified and they are used to design appropriate strategies for improvement.

  9. On evaluating compliance with air pollution levels not to be exceeded more than once per year

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Sidik, S. M.

    1974-01-01

    The adequacy of currently practiced monitoring and data reduction techniques for assessing compliance with 24-hour Air Quality Standards (AQS) not to be exceeded more than once per year is considered. The present situation for suspended particulates is discussed. The following conclusions are reached: (1) For typical less-than-daily sampling (i.e., 60 to 120 24-hour samples per year) the deviation from independence of the data set should not be substantial. (2) The interchange of exponentiation and expectation operations in the EPA data reduction model underestimates the second highest level by about 4 to 8 percent for typical sigma values. (3) Estimates of the second highest pollution level carry a large statistical variability arising from the finite size of the sample; the 0.95 confidence interval ranges from ±40 percent for 120 samples per year to ±84 percent for 30 samples per year. (4) The design value suggested by EPA for abatement and/or control planning purposes typically gives a margin of safety of 60 to 120 percent.
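
    Conclusion (3) is easy to reproduce qualitatively with a Monte Carlo sketch. The lognormal parameters and simulation settings below are assumptions chosen for illustration, not the report's data reduction model.

    ```python
    # Sampling variability of the "second highest" 24-hour level under sparse sampling.
    import numpy as np

    rng = np.random.default_rng(1)
    gm, gsd = 60.0, 1.7          # assumed geometric mean and geometric std. dev.
    mu, sigma = np.log(gm), np.log(gsd)

    def second_highest_estimates(samples_per_year, n_sims=5000):
        draws = rng.lognormal(mu, sigma, size=(n_sims, samples_per_year))
        return np.sort(draws, axis=1)[:, -2]   # second-highest observed value per simulated year

    for n in (120, 30):
        est = second_highest_estimates(n)
        lo, hi = np.percentile(est, [2.5, 97.5])
        print(f"{n:>3} samples/yr: median {np.median(est):.0f}, 95% interval [{lo:.0f}, {hi:.0f}]")
    # Fewer sampling days -> a much wider interval around the second-highest estimate,
    # qualitatively consistent with the large confidence intervals noted above.
    ```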

  10. Food portion size area mediates energy effects on expected anxiety in anorexia nervosa.

    PubMed

    Herzog, Musya; Douglas, Christopher R; Kissileff, Harry R; Brunstrom, Jeffrey M; Halmi, Katherine Ann

    2017-05-01

    A study in which adolescent patients with anorexia nervosa (n = 24) rated their expected food-anxiety in response to images of portions of food (potatoes, rice, pizza, and M&Ms) showed that lower energy-dense foods elicited higher expected anxiety per kilocalorie than higher energy-dense foods. However, the area of the portion sizes could be an unmeasured variable driving the anxiety response. To test the hypothesis that area mediates the effects of energy content on expected anxiety, the same images of portions were measured in area (cm²), and standardized values of expected anxiety were regressed from standardized values of energy and area of portions. With regression of expected anxiety from portion size in area, M&Ms, which had the highest energy density of the four foods, elicited the highest expected anxiety slope (β = 1.75), which was significantly different from the expected anxiety slopes of the other three foods (β range = 0.67–0.96). Area was confirmed as a mediator of energy effects from loss of significance of the slopes when area was added to the regression of expected anxiety from energy × food. When expected anxiety was regressed from food, area, energy and the area-by-energy interaction, area accounted for 5.7 times more variance than energy, and β for area (0.7) was significantly larger (by 0.52, SE = 0.15, t = 3.4, p = 0.0007) than β for energy (0.19). Area could be a learned cue for the energy content of food portions, and thus, for weight gain potential, which triggers anxiety in patients with anorexia nervosa. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
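
    The mediation logic (the energy slope should lose significance once area enters the model) can be illustrated with a toy regression; the data below are simulated, not the study's.

    ```python
    # Toy mediation check: does the energy coefficient shrink once area is added?
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 200
    area = rng.uniform(20, 400, n)                 # portion area in cm^2 (assumed)
    energy = 1.5 * area + rng.normal(0, 30, n)     # energy tracks area (assumed energy density)
    anxiety = 0.02 * area + rng.normal(0, 1, n)    # anxiety driven by area in this toy model

    z = lambda x: (x - x.mean()) / x.std()         # standardize, as in the abstract

    m1 = sm.OLS(z(anxiety), sm.add_constant(z(energy))).fit()
    m2 = sm.OLS(z(anxiety), sm.add_constant(np.column_stack([z(energy), z(area)]))).fit()

    print("energy beta alone     :", round(m1.params[1], 2))
    print("energy beta with area :", round(m2.params[1], 2))
    print("area beta             :", round(m2.params[2], 2))
    # Attenuation of the energy coefficient when area enters is the mediation signature.
    ```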

  11. A goal attainment pain management program for older adults with arthritis.

    PubMed

    Davis, Gail C; White, Terri L

    2008-12-01

    The purpose of this study was to test a pain management intervention that integrates goal setting with older adults (age ≥65) living independently in residential settings. This preliminary testing of the Goal Attainment Pain Management Program (GAPMAP) included a sample of 17 adults (mean age 79.29 years) with self-reported pain related to arthritis. Specific study aims were to: 1) explore the use of individual goal setting; 2) determine participants' levels of goal attainment; 3) determine whether changes occurred in the pain management methods used and found to be helpful by GAPMAP participants; and 4) determine whether changes occurred in selected pain-related variables (i.e., experience of living with persistent pain, the expected outcomes of pain management, pain management barriers, and global ratings of perceived pain intensity and success of pain management). Because of the small sample size, both parametric (t test) and nonparametric (Wilcoxon signed rank test) analyses were used to examine differences from pretest to posttest. Results showed that older individuals could successfully participate in setting and attaining individual goals. Thirteen of the 17 participants (76%) met their goals at the expected level or above. Two management methods (exercise and using a heated pool, tub, or shower) were used significantly more often after the intervention, and two methods (exercise and distraction) were identified as significantly more helpful. Two pain-related variables (experience of living with persistent pain and expected outcomes of pain management) revealed significant change, and all of those tested showed overall improvement.

  12. A Computer Simulation Study of Vntr Population Genetics: Constrained Recombination Rules Out the Infinite Alleles Model

    PubMed Central

    Harding, R. M.; Boyce, A. J.; Martinson, J. J.; Flint, J.; Clegg, J. B.

    1993-01-01

    Extensive allelic diversity in variable numbers of tandem repeats (VNTRs) has been discovered in the human genome. For population genetic studies of VNTRs, such as forensic applications, it is important to know whether a neutral mutation-drift balance of VNTR polymorphism can be represented by the infinite alleles model. The assumption of the infinite alleles model that each new mutant is unique is very likely to be violated by unequal sister chromatid exchange (USCE), the primary process believed to generate VNTR mutants. We show that increasing both mutation rates and misalignment constraint for intrachromosomal recombination in a computer simulation model reduces simulated VNTR diversity below the expectations of the infinite alleles model. Maximal constraint, represented as slippage of single repeats, reduces simulated VNTR diversity to levels expected from the stepwise mutation model. Although misalignment rule is the more important variable, mutation rate also has an effect. At moderate rates of USCE, simulated VNTR diversity fluctuates around infinite alleles expectation. However, if rates of USCE are high, as for hypervariable VNTRs, simulated VNTR diversity is consistently lower than predicted by the infinite alleles model. This has been observed for many VNTRs and accounted for by technical problems in distinguishing alleles of neighboring size classes. We use sampling theory to confirm the intrinsically poor fit to the infinite alleles model of both simulated VNTR diversity and observed VNTR polymorphisms sampled from two Papua New Guinean populations. PMID:8293988

  13. A computer simulation study of VNTR population genetics: constrained recombination rules out the infinite alleles model.

    PubMed

    Harding, R M; Boyce, A J; Martinson, J J; Flint, J; Clegg, J B

    1993-11-01

    Extensive allelic diversity in variable numbers of tandem repeats (VNTRs) has been discovered in the human genome. For population genetic studies of VNTRs, such as forensic applications, it is important to know whether a neutral mutation-drift balance of VNTR polymorphism can be represented by the infinite alleles model. The assumption of the infinite alleles model that each new mutant is unique is very likely to be violated by unequal sister chromatid exchange (USCE), the primary process believed to generate VNTR mutants. We show that increasing both mutation rates and misalignment constraint for intrachromosomal recombination in a computer simulation model reduces simulated VNTR diversity below the expectations of the infinite alleles model. Maximal constraint, represented as slippage of single repeats, reduces simulated VNTR diversity to levels expected from the stepwise mutation model. Although misalignment rule is the more important variable, mutation rate also has an effect. At moderate rates of USCE, simulated VNTR diversity fluctuates around infinite alleles expectation. However, if rates of USCE are high, as for hypervariable VNTRs, simulated VNTR diversity is consistently lower than predicted by the infinite alleles model. This has been observed for many VNTRs and accounted for by technical problems in distinguishing alleles of neighboring size classes. We use sampling theory to confirm the intrinsically poor fit to the infinite alleles model of both simulated VNTR diversity and observed VNTR polymorphisms sampled from two Papua New Guinean populations.

  14. A computer simulation study of VNTR population genetics: Constrained recombination rules out the infinite alleles model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, R.M.; Martinson, J.J.; Flint, J.

    1993-11-01

    Extensive allelic diversity in variable numbers of tandem repeats (VNTRs) has been discovered in the human genome. For population genetic studies of VNTRs, such as forensic applications, it is important to know whether a neutral mutation-drift balance of VNTR polymorphism can be represented by the infinite alleles model. The assumption of the infinite alleles model that each new mutant is unique is very likely to be violated by unequal sister chromatid exchange (USCE), the primary process believed to generate VNTR mutants. The authors show that increasing both mutation rates and misalignment constraint for intrachromosomal recombination in a computer simulation model reduces simulated VNTR diversity below the expectations of the infinite alleles model. Maximal constraint, represented as slippage of single repeats, reduces simulated VNTR diversity to levels expected from the stepwise mutation model. Although misalignment rule is the more important variable, mutation rate also has an effect. At moderate rates of USCE, simulated VNTR diversity fluctuates around infinite alleles expectation. However, if rates of USCE are high, as for hypervariable VNTRs, simulated VNTR diversity is consistently lower than predicted by the infinite alleles model. This has been observed for many VNTRs and accounted for by technical problems in distinguishing alleles of neighboring size classes. The authors use sampling theory to confirm the intrinsically poor fit to the infinite alleles model of both simulated VNTR diversity and observed VNTR polymorphisms sampled from two Papua New Guinean populations. 25 refs., 20 figs., 4 tabs.

  15. Novel tretinoin formulations: a drug-in-cyclodextrin-in-liposome approach.

    PubMed

    Ascenso, Andreia; Cruz, Mariana; Euletério, Carla; Carvalho, Filomena A; Santos, Nuno C; Marques, Helena C; Simões, Sandra

    2013-09-01

    The aims of this experimental work were the incorporation and full characterization of the system Tretinoin-in-dimethyl-beta-cyclodextrin-in-ultradeformable vesicles (Tretinoin-CyD-UDV) and Tretinoin-in-ultradeformable vesicles (Tretinoin-UDV). The Tretinoin-CyD complex was prepared by kneading and the UDV by adding soybean phosphatidylcholine (SPC) to Tween® 80 followed by an appropriate volume of sodium phosphate buffer solution to make a 10%-20% lipid suspension. The resulting suspension was brought to a final mean vesicle size of approximately 150 nm by sequential filtration. The physicochemical characterization was based on: the evaluation of mean particle size and polydispersity index (PI) measured by photon correlation spectroscopy (PCS) and atomic force microscopy (AFM) topographic imaging; zeta potential (ζ-potential) and the SPC concentration determined by Laser-Doppler anemometry and an enzymatic-colorimetric test, respectively. The quantification of the incorporated Tretinoin and its chemical stability (during preparation and storage) was assayed by HPLC at 342 nm. It was possible to obtain the system Tretinoin-CyD-UDV. The mean vesicle size was the most stable parameter over the time course of the experiments. AFM showed that Tretinoin-CyD-UDV samples were very heterogeneous in size, having three distinct subpopulations, while Tretinoin-UDV samples had only one homogeneous size population. The ζ-potential measurements showed that vesicle surface charge was low and negative, as expected. The incorporation efficiency was high, and no significant differences between Tretinoin-CyD-UDV and Tretinoin-UDV were observed. However, only the Tretinoin-UDV formulation with 20% lipid concentration remained chemically stable during the evaluation period. According to our results, Tretinoin-UDV with 20% lipid concentration seems to be a better approach than Tretinoin-CyD-UDV, given its higher chemical stability.

  16. Systematic review of the multidimensional fatigue symptom inventory-short form.

    PubMed

    Donovan, Kristine A; Stein, Kevin D; Lee, Morgan; Leach, Corinne R; Ilozumba, Onaedo; Jacobsen, Paul B

    2015-01-01

    Fatigue is a subjective complaint that is believed to be multifactorial in its etiology and multidimensional in its expression. Fatigue may be experienced by individuals in different dimensions as physical, mental, and emotional tiredness. The purposes of this study were to review and characterize the use of the 30-item Multidimensional Fatigue Symptom Inventory-Short Form (MFSI-SF) in published studies and to evaluate the available evidence for its psychometric properties. A systematic review was conducted to identify published articles reporting results for the MFSI-SF. Data were analyzed to characterize internal consistency reliability of multi-item MFSI-SF scales and test-retest reliability. Correlation coefficients were summarized to characterize concurrent, convergent, and divergent validity. Standardized effect sizes were calculated to characterize the discriminative validity of the MFSI-SF and its sensitivity to change. Seventy articles were identified. Sample sizes reported ranged from 10 to 529 and nearly half consisted exclusively of females. More than half the samples were composed of cancer patients; of those, 59% were breast cancer patients. Mean alpha coefficients for MFSI-SF fatigue subscales ranged from 0.84 for physical fatigue to 0.93 for general fatigue. The MFSI-SF demonstrated moderate test-retest reliability in a small number of studies. Correlations with other fatigue and vitality measures were moderate to large in size and in the expected direction. The MFSI-SF fatigue subscales were positively correlated with measures of distress, depressive, and anxious symptoms. Effect sizes for discriminative validity ranged from medium to large, while effect sizes for sensitivity to change ranged from small to large. Findings demonstrate the positive psychometric properties of the MFSI-SF, provide evidence for its usefulness in medically ill and nonmedically ill individuals, and support its use in future studies.

  17. Climate change affects low trophic level marine consumers: warming decreases copepod size and abundance.

    PubMed

    Garzke, Jessica; Ismar, Stefanie M H; Sommer, Ulrich

    2015-03-01

    Concern about climate change has re-ignited interest in universal ecological responses to temperature variations: (1) biogeographical shifts, (2) phenology changes, and (3) size shifts. In this study we used copepods as model organisms to study size responses to temperature because of their central role in the pelagic food web and because of the ontogenetic length constancy between molts, which facilitates the definition of size of distinct developmental stages. In order to test the expected temperature-induced shifts towards smaller body size and lower abundances under warming conditions, a mesocosm experiment using plankton from the Baltic Sea at three temperature levels (ambient, ambient +4 °C, ambient -4 °C) was performed in summer 2010. Overall copepod and copepodit abundances, copepod size at all life stages, and adult copepod size in particular, showed significant temperature effects. As expected, zooplankton peak abundance was lower in warm than in ambient treatments. Copepod size-at-immature stage significantly increased in cold treatments, while adult size significantly decreased in warm treatments.

  18. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
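
    The scaling logic behind the two strategies can be illustrated outside the measurement-model setting. The sketch below uses a simple multinomial goodness-of-fit test, whose chi-square also grows with n for a fixed degree of misfit; the proportions and sample sizes are assumptions for illustration, not the paper's simulation design.

    ```python
    # Compare: rescale the full-sample chi-square vs. recompute on a random subsample.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    p_model = np.array([0.25, 0.25, 0.25, 0.25])     # hypothesized proportions
    p_true  = np.array([0.27, 0.25, 0.24, 0.24])     # slightly misspecified truth

    n_full, n_small = 21_000, 5_000
    full_sample = rng.choice(4, size=n_full, p=p_true)

    def chisq(sample):
        observed = np.bincount(sample, minlength=4)
        expected = p_model * len(sample)
        return ((observed - expected) ** 2 / expected).sum()

    chi_full = chisq(full_sample)

    # Strategy 1: adjust the full-sample statistic by the sample-size ratio.
    chi_adjusted = chi_full * n_small / n_full

    # Strategy 2: recompute on an actual random subsample of the target size.
    chi_subsample = chisq(rng.choice(full_sample, size=n_small, replace=False))

    df = 3
    for name, value in [("adjusted", chi_adjusted), ("random subsample", chi_subsample)]:
        print(f"{name:>17}: chi2 = {value:6.1f}, p = {stats.chi2.sf(value, df):.3f}")
    ```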

  19. A pleiotropy-informed Bayesian false discovery rate adapted to a shared control design finds new disease associations from GWAS summary statistics.

    PubMed

    Liley, James; Wallace, Chris

    2015-02-01

    Genome-wide association studies (GWAS) have been successful in identifying single nucleotide polymorphisms (SNPs) associated with many traits and diseases. However, at existing sample sizes, these variants explain only part of the estimated heritability. Leverage of GWAS results from related phenotypes may improve detection without the need for larger datasets. The Bayesian conditional false discovery rate (cFDR) constitutes an upper bound on the expected false discovery rate (FDR) across a set of SNPs whose p values for two diseases are both less than two disease-specific thresholds. Calculation of the cFDR requires only summary statistics and has several advantages over traditional GWAS analysis. However, existing methods require distinct control samples between studies. Here, we extend the technique to allow for some or all controls to be shared, increasing applicability. Several different SNP sets can be defined with the same cFDR value, and we show that the expected FDR across the union of these sets may exceed the expected FDR in any single set. We describe a procedure to establish an upper bound for the expected FDR among the union of such sets of SNPs. We apply our technique to pairwise analysis of p values from ten autoimmune diseases with variable sharing of controls, enabling discovery of 59 SNP-disease associations which do not reach GWAS significance after genomic control in individual datasets. Most of the SNPs we highlight have previously been confirmed using replication studies or larger GWAS, a useful validation of our technique; we report eight SNP-disease associations across five diseases not previously declared. Our technique extends and strengthens the previous algorithm, and establishes robust limits on the expected FDR. This approach can improve SNP detection in GWAS, and give insight into shared aetiology between phenotypically related conditions.
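
    The basic empirical cFDR estimator (without the paper's shared-control extension) can be written compactly; the p-value vectors below are simulated purely for illustration.

    ```python
    # Empirical conditional FDR: cFDR_i ~= p_i * #{Q <= q_i} / #{P <= p_i and Q <= q_i}.
    import numpy as np

    def cfdr(p_primary, p_conditional):
        p = np.asarray(p_primary)
        q = np.asarray(p_conditional)
        out = np.empty_like(p)
        for i, (pi, qi) in enumerate(zip(p, q)):
            n_cond = np.sum(q <= qi)
            n_both = np.sum((p <= pi) & (q <= qi))
            out[i] = min(1.0, pi * n_cond / max(n_both, 1))
        return out

    # Toy example: 10,000 SNPs, a handful associated with both traits.
    rng = np.random.default_rng(4)
    p1, p2 = rng.uniform(size=10_000), rng.uniform(size=10_000)
    shared = rng.choice(10_000, 20, replace=False)
    p1[shared] *= 1e-4
    p2[shared] *= 1e-3

    hits = np.where(cfdr(p1, p2) < 0.01)[0]
    print(f"{len(hits)} SNPs pass cFDR < 0.01")
    ```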

  20. Survival analysis and classification methods for forest fire size

    PubMed Central

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at “being held” (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at “being held” exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather at the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on a-priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances. PMID:29320497
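
    The modelling step described above can be sketched with the lifelines library, treating fire size at "being held" as the time-like variable; the column names, data-generating assumptions, and coefficients below are hypothetical, not the study's.

    ```python
    # Toy Cox proportional hazards fit for fire size at "being held".
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(5)
    n = 200
    fwi = rng.gamma(4, 3, n)                       # fire weather index (assumed scale)
    conifer = rng.integers(0, 2, n)                # fuel type dummy (assumed)
    air = rng.integers(0, 2, n)                    # initial attack method dummy (assumed)
    # Toy data-generating model: larger held sizes under drier weather and conifer fuels.
    size_held = np.exp(0.08 * fwi + 0.5 * conifer - 0.2 * air + rng.normal(0, 0.7, n))

    df = pd.DataFrame({"size_held": size_held, "fwi": fwi,
                       "conifer": conifer, "air_attack": air,
                       "event": 1})                # every fire is eventually held (no censoring)

    cph = CoxPHFitter()
    cph.fit(df, duration_col="size_held", event_col="event")
    cph.print_summary()   # hazard ratio > 1 means the fire tends to be held at smaller sizes
    ```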

  1. Survival analysis and classification methods for forest fire size.

    PubMed

    Tremblay, Pier-Olivier; Duchesne, Thierry; Cumming, Steven G

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at "being held" (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at "being held" exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather at the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on a-priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances.

  2. Simulation-based power calculation for designing interrupted time series analyses of health policy interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Ross-Degnan, Dennis

    2011-11-01

    Interrupted time series is a strong quasi-experimental research design to evaluate the impacts of health policy interventions. Using simulation methods, we estimated the power requirements for interrupted time series studies under various scenarios. Simulations were conducted to estimate the power of segmented autoregressive (AR) error models when autocorrelation ranged from -0.9 to 0.9 and effect size was 0.5, 1.0, and 2.0, investigating balanced and unbalanced numbers of time periods before and after an intervention. Simple scenarios of autoregressive conditional heteroskedasticity (ARCH) models were also explored. For AR models, power increased when sample size or effect size increased, and tended to decrease when autocorrelation increased. Compared with a balanced number of study periods before and after an intervention, designs with unbalanced numbers of periods had less power, although that was not the case for ARCH models. The power to detect effect size 1.0 appeared to be reasonable for many practical applications with a moderate or large number of time points in the study equally divided around the intervention. Investigators should be cautious when the expected effect size is small or the number of time points is small. We recommend conducting various simulations before investigation. Copyright © 2011 Elsevier Inc. All rights reserved.
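
    A stripped-down version of the simulation idea is sketched below: generate segmented series with AR(1) errors, fit a segmented model with autoregressive errors, and count how often the level-change term is detected. The effect size, series lengths, and autocorrelation are assumptions for illustration, not the paper's exact scenarios.

    ```python
    # Simulation-based power estimate for an interrupted time series level change.
    import numpy as np
    import statsmodels.api as sm

    def simulate_power(n_pre=24, n_post=24, effect=1.0, rho=0.3, n_sims=500, alpha=0.05):
        rng = np.random.default_rng(6)
        n = n_pre + n_post
        time = np.arange(n)
        post = (time >= n_pre).astype(float)
        time_after = np.where(post == 1, time - n_pre, 0)
        X = sm.add_constant(np.column_stack([time, post, time_after]))
        hits = 0
        for _ in range(n_sims):
            e = np.zeros(n)                          # AR(1) errors, unit innovation variance
            for t in range(1, n):
                e[t] = rho * e[t - 1] + rng.normal()
            y = 0.02 * time + effect * post + e      # effect = level change (roughly in SD units)
            fit = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=5)
            hits += fit.pvalues[2] < alpha           # column 2 = level-change term
        return hits / n_sims

    print("estimated power:", simulate_power())
    ```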

  3. Counter-propagating optical trapping system for size and refractive index measurement of microparticles.

    PubMed

    Flynn, Richard A; Shao, Bing; Chachisvilis, Mirianas; Ozkan, Mihrimah; Esener, Sadik C

    2006-01-15

    We propose and demonstrate a novel approach to measure the size and refractive index of microparticles based on two-beam optical trapping, where forward-scattered light is detected to give information about the particle. The counter-propagating optical trap measurement (COTM) system exploits the capability of optical traps to measure pico-Newton forces to characterize the refractive index and size of microparticles. Unlike refractometry, the current best technique for microparticle refractive index measurement and a bulk technique requiring changes to the fluid composition of the sample, our optical trap technique works with any transparent fluid and enables single-particle analysis without the use of biological markers. A ray-optics model is used to explore the physical operation of the COTM system, predict system performance and aid system design. Experiments demonstrate an accuracy of refractive index measurement of Δn = 0.013 and size measurement of 3% of diameter with 2% standard deviation. Present performance is instrumentation limited, and a potential improvement by more than two orders of magnitude can be expected in the future. With further development in parallelism and miniaturization, the system offers advantages for cell manipulation and bioanalysis compatible with lab-on-a-chip systems.

  4. Effect of cationic surfactants on characteristics and colorimetric behavior of polydiacetylene/silica nanocomposite as time-temperature indicator

    NASA Astrophysics Data System (ADS)

    Nopwinyuwong, Atchareeya; Kitaoka, Takuya; Boonsupthip, Waraporn; Pechyen, Chiravoot; Suppakul, Panuwat

    2014-09-01

    Polydiacetylene (PDA)/silica nanocomposites were synthesized by self-assembly method using polymerizable amphiphilic diacetylene monomers, 10,12-pentacosadiynoic acid (PCDA). Addition of cationic surfactants (PDADMAC and CTAB) to PDA/SiO2 nanocomposites induced higher intermolecular force which affected their size, shape and color transition. Pure PDA, PDA/SiO2, PDA/SiO2/PDADMAC and PDA/SiO2/CTAB were investigated by particle size analysis, TEM, SEM, UV-vis spectroscopy and FT-IR. It was found that the PDA/SiO2 nanocomposites exhibited slightly larger particle sizes than those of other samples. The PDA/SiO2 nanocomposites with a core-shell structure were almost regarded as spherical-shaped particles. Cationic surfactants, especially CTAB, presumably affected the particle size and shape of PDA/SiO2 nanocomposites due to the disruption of hydrogen bonding between PDA head group and ammonium group. The colorimetric response of both PDA/SiO2/surfactant and surfactant-free PDA/SiO2 aqueous solutions directly changed in relation to time and temperature; thus they were expected to be applied as a new polymer-based time-temperature indicator (TTI).

  5. Determinants of corporate dividend policy in Indonesia

    NASA Astrophysics Data System (ADS)

    Lestari, H. S.

    2018-01-01

    This study aims to investigate the determinant factors that affect dividend policy. The sample consists of manufacturing companies listed on the Indonesia Stock Exchange (IDX) over the period 2011-2015. The independent variables are earnings, cash flow, free cash flow, debt, growth opportunities, investment opportunities, firm size, largest shareholder, firm risk, and lagged dividend, with dividend policy as the dependent variable. The study examines a total of 32 manufacturing companies. Multiple regression analysis using the software EViews 9.0 reveals that earnings, cash flow, free cash flow, firm size, and lagged dividend have a significant effect on dividend policy, whereas debt, growth opportunities, investment opportunities, largest shareholder, and firm risk have no significant effect on dividend policy. The results of this study are expected to be used by financial managers in improving corporate profits and as basic information for investment return decisions.

  6. Pandoraviruses: amoeba viruses with genomes up to 2.5 Mb reaching that of parasitic eukaryotes.

    PubMed

    Philippe, Nadège; Legendre, Matthieu; Doutre, Gabriel; Couté, Yohann; Poirot, Olivier; Lescot, Magali; Arslan, Defne; Seltzer, Virginie; Bertaux, Lionel; Bruley, Christophe; Garin, Jérome; Claverie, Jean-Michel; Abergel, Chantal

    2013-07-19

    Ten years ago, the discovery of Mimivirus, a virus infecting Acanthamoeba, initiated a reappraisal of the upper limits of the viral world, both in terms of particle size (>0.7 micrometers) and genome complexity (>1000 genes), dimensions typical of parasitic bacteria. The diversity of these giant viruses (the Megaviridae) was assessed by sampling a variety of aquatic environments and their associated sediments worldwide. We report the isolation of two giant viruses, one off the coast of central Chile, the other from a freshwater pond near Melbourne (Australia), without morphological or genomic resemblance to any previously defined virus families. Their micrometer-sized ovoid particles contain DNA genomes of at least 2.5 and 1.9 megabases, respectively. These viruses are the first members of the proposed "Pandoravirus" genus, a term reflecting their lack of similarity with previously described microorganisms and the surprises expected from their future study.

  7. Using Social Cognitive Theory to Predict Medication Compliance Behavior in Patients with Depression in Southern United States in 2016 in a Cross-Sectional Study.

    PubMed

    Bennett, Britney; Sharma, Manoj; Bennett, Russell; Mawson, Anthony R; Buxbaum, Sarah G; Sung, Jung Hye

    2018-03-01

    Introduction: Depression is a major public health issue. One of the concerns in depression research and practice pertains to non-compliance with prescribed medications. The purpose of the study was to predict compliance with medication use for patients with depression using social cognitive theory (SCT). Based on this study it was envisaged that recommendations for interventions to enhance compliance with medication use could be developed for patients with depression. Methods: The study was conducted using a cross-sectional design (n=148) in the southern United States with a convenience sample of clinic-based depression patients and a 37-item valid and reliable questionnaire. Sample size was calculated to be 148 using G*Power (five predictors, 0.80 power at the 0.05 alpha level, and an estimated effect size of 0.10, inflated by 10% for missing data). Social cognitive theory constructs of expectations, self-efficacy and self-efficacy in overcoming barriers, self-control, and environment were reified. Data were analyzed using multiple linear regression and multiple logistic regression analyses. Results: Self-control for taking medication for depression (P=0.04), expectations for taking medication for depression (P=0.025), age (P<0.0001) and race (P=0.04) were significantly related to intent to take medication for depression (adjusted R² = 0.183). Among racial groups, Blacks had lower intent to take medication for depression. Conclusion: Social cognitive theory is weakly predictive, with low explained variance, for taking medication for depression. It needs to be bolstered by newer theories such as the integrative model or the multi-theory model of health behavior change when designing educational interventions aimed at enhancing compliance with medication for depression.
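
    The reported G*Power calculation can be approximated with a noncentral-F computation; the conventions below (noncentrality λ = f²·N for a fixed-model multiple regression F test) follow common practice and are assumptions rather than a reproduction of the authors' exact settings, so the resulting N may differ slightly from the reported figure before the 10% inflation.

    ```python
    # Smallest N for a multiple regression F test: 5 predictors, f^2 = 0.10,
    # alpha = 0.05, power = 0.80, then inflated 10% for anticipated missing data.
    import math
    from scipy import stats

    def regression_power(n, n_predictors=5, f2=0.10, alpha=0.05):
        df1, df2 = n_predictors, n - n_predictors - 1
        nc = f2 * n                                   # noncentrality parameter
        f_crit = stats.f.ppf(1 - alpha, df1, df2)
        return 1 - stats.ncf.cdf(f_crit, df1, df2, nc)

    n = 10
    while regression_power(n) < 0.80:
        n += 1
    print("required N:", n, "-> with 10% inflation:", math.ceil(n * 1.1))
    ```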

  8. SDSS-IV MaNGA: Galaxy Pair Fraction and Correlated Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Fu, Hai; Steffen, Joshua L.; Gross, Arran C.; Dai, Y. Sophia; Isbell, Jacob W.; Lin, Lihwai; Wake, David; Xue, Rui; Bizyaev, Dmitry; Pan, Kaike

    2018-04-01

    We have identified 105 galaxy pairs at z ∼ 0.04 with the MaNGA integral-field spectroscopic data. The pairs have projected separations between 1 and 30 kpc, and are selected to have radial velocity offsets less than 600 km s⁻¹ and stellar mass ratio between 0.1 and 1. The pair fraction increases with both the physical size of the integral-field unit and the stellar mass, consistent with theoretical expectations. We provide the best-fit analytical function of the pair fraction and find that ∼3% of M* galaxies are in close pairs. For both isolated galaxies and paired galaxies, active galactic nuclei (AGNs) are selected using emission-line ratios and Hα equivalent widths measured inside apertures at a fixed physical size. We find AGNs in ∼24% of the paired galaxies and binary AGNs in ∼13% of the pairs. To account for the selection biases in both the pair sample and the MaNGA sample, we compare the AGN comoving volume densities with those expected from the mass- and redshift-dependent AGN fractions. We find a strong (∼5×) excess of binary AGNs over random pairing and a mild (∼20%) deficit of single AGNs. The binary AGN excess increases from ∼2× to ∼6× as the projected separation decreases from 10–30 to 1–10 kpc. Our results indicate that the pairing of galaxies preserves the AGN duty cycle in individual galaxies but increases the population of binary AGNs through correlated activities. We suggest tidally induced galactic-scale shocks and AGN cross-ionization as two plausible channels to produce low-luminosity narrow-line-selected binary AGNs.

  9. A multicenter, randomized study to test immunosuppressive therapy with oral prednisone for the prevention of restenosis after percutaneous coronary interventions: cortisone plus BMS or DES versus BMS alone to eliminate restenosis (CEREA-DES) - study design and rationale.

    PubMed

    Ribichini, Flavio; Tomai, Fabrizio; De Luca, Giuseppe; Boccuzzi, Giacomo; Presbitero, Patrizia; Pesarini, Gabriele; Ferrero, Valeria; Ghini, Anna S; Pastori, Francesca; De Luca, Leonardo; Zavalloni, Denis; Soregaroli, Daniela; Garbo, Roberto; Franchi, Elena; Marino, Paolo; Minelli, Massimo; Vassanelli, Corrado

    2009-02-01

    Several randomized trials and registries have shown a reduction of restenosis after coronary angioplasty with drug-eluting stents (DESs) compared with bare metal stents (BMSs). However, cost-efficacy analysis and long-term outcome of DESs compared to BMSs deserve further assessment. Moreover, concern has been raised regarding adverse clinical events occurring late after DES implantation, in particular, late stent thrombosis related to the suspension of dual antiplatelet therapy. The use of a short-cycle oral treatment with prednisone at immunosuppressive dose after BMS implantation has shown remarkable efficacy in reducing restenosis in nondiabetic patients, with very low additional cost and without the need for long-term dual antiplatelet therapy. Such results are, however, limited by a small sample size. Cortisone plus BMS or DES versus BMS alone to Eliminate Restenosis is an independent, prospective, multicenter, randomized study. It will randomize 375 nondiabetic patients with coronary artery disease into three different arms: BMS (control group), DES (DES group) or BMS followed by a 40-day prednisone treatment (prednisone group). The DES and the prednisone groups will be compared to the control group to investigate the expected clinical advantage. The primary endpoint of the study is event-free survival from cardiovascular death, myocardial infarction and recurrence of ischemia needing repeated target vessel revascularization at 1 year. Secondary endpoints are the event-free survival analysis at 2 and 3 years, the restenosis rate at 9 months, and cost-effectiveness at 1, 2 and 3 years. The expected primary endpoint rates are 90% for DESs and for prednisone-treated patients and 77% for BMSs. The study was designed as a superiority trial to compare DES, and BMS plus prednisone, with BMS alone. A sample size of 118 patients per group provides 80% power, assuming complete 12-month follow-up information is available for each patient. To allow for dropouts, the sample size was increased to 375 patients to be enrolled in five Italian hospitals. This study will provide an estimate of the magnitude of the net clinical and economic benefits of DES, and of the safety and efficacy of BMS plus cortisone, compared with the standard use of BMS alone in nondiabetic patients with coronary artery disease.

  10. Testing paleointensity determinations on recent lava flows and scorias from Miyakejima, Japan

    NASA Astrophysics Data System (ADS)

    Fukuma, K.

    2013-12-01

    No consensus has yet been reached on paleointensity methods. Even the classical Thellier method has not been fully tested on recent lava flows with known geomagnetic field intensity using a systematic sampling scheme. In this study, the Thellier method was applied to the 1983, 1962 and 1940 basaltic lava flows and scorias from Miyakejima, Japan. Several vertical lava sections and quenched scorias, which are quite variable in magnetic mineralogy and grain size, provide an unparalleled opportunity to test paleointensity methods. Thellier experiments were conducted on a completely automated three-component spinner magnetometer with the thermal demagnetizer 'tspin'. Specimens were heated in air, the applied laboratory field was 45 μT, and pTRM checks were performed at every two heating steps. Curie points and hysteresis properties were obtained on small fragments removed from the cylindrical specimens. For lava flows, sigmoidal curves were commonly observed on the Arai diagrams. In particular, the interior part of lava flows always revealed sigmoidal patterns and sometimes resulted in erroneously blurred behaviors. The directions after zero-field heating were not necessarily stable in the course of the Thellier experiments. It was very difficult, for the interior part, to ascertain linear segments on Arai diagrams corresponding to the geomagnetic field intensity at the eruption. Upper and lower clinker samples also generally revealed sigmoidal or upward-concave curves on Arai diagrams. Neither the lower nor the higher temperature portions of the sigmoids or concaves gave the expected geomagnetic field intensities. However, there were two exceptional cases of lava flows giving correct field intensities: upper clinkers with relatively low unblocking temperatures (< 400 °C) and lower clinkers with broad unblocking temperature ranges from room temperature to 600 °C. The most promising target for paleointensity experiments within the volcanic rocks is scoria. Scoria samples always carry single Curie temperatures higher than 500 °C, and ratios of saturation remanence to saturation magnetization (Mr/Ms) of about 0.5 are indicative of truly single-domain low-titanium titanomagnetite. Unambiguous straight lines were always observed on Arai diagrams covering broad temperature ranges, as for the lower clinker samples, and the gradients gave the expected field values within a few percent error. Thellier experiments applied to the recent lava flows did not successfully recover the expected field intensity from most samples: either no linear segment was recognized, or incorrect paleointensity values were obtained from short segments with limited temperature ranges. In Thellier and other types of paleointensity experiments, laboratory alteration is checked in detail, but once a sample passes the alteration check, TRM/NRM ratios from any limited temperature or field range are accepted as reflecting paleointensity. Previously published paleointensity data from lava flows likely include much such dubious data. In general, lava flows are not suitable for paleointensity determinations in light of their large grain size and mixed magnetic mineralogy, except for scoria and clinker.

  11. Results of Large-Scale Spacecraft Flammability Tests

    NASA Technical Reports Server (NTRS)

    Ferkul, Paul; Olson, Sandra; Urban, David L.; Ruff, Gary A.; Easton, John; T'ien, James S.; Liao, Ta-Ting T.; Fernandez-Pello, A. Carlos; Torero, Jose L.; Eigenbrand, Christian

    2017-01-01

    For the first time, a large-scale fire was intentionally set inside a spacecraft while in orbit. Testing in low gravity aboard spacecraft had been limited to samples of modest size: for thin fuels the longest samples burned were around 15 cm in length, and thick fuel samples were even smaller. This is despite the fact that fire is a catastrophic hazard for spaceflight and that the spread and growth of a fire, combined with its interactions with the vehicle, cannot be expected to scale linearly. While every type of occupied structure on earth has been the subject of full-scale fire testing, this had never been attempted in space owing to the complexity, cost, risk and absence of a safe location. Thus, there is a gap in knowledge of fire behavior in spacecraft. The recent utilization of large, unmanned resupply craft has provided the needed capability: a habitable but unoccupied spacecraft in low earth orbit. One such vehicle was used to study the flame spread over a 94 × 40.6 cm thin charring solid (fiberglass-cotton fabric). The sample was an order of magnitude larger than anything studied to date in microgravity and was of sufficient scale that it consumed 1.5 of the available oxygen. The experiment, called Saffire, consisted of two tests: forward or concurrent flame spread (with the direction of flow) and opposed flame spread (against the direction of flow). The average forced air speed was 20 cm/s. For the concurrent flame spread test, the flame size remained constrained after the ignition transient, which is not the case in 1-g. These results were qualitatively different from those on earth, where an upward-spreading flame on a sample of this size accelerates and grows. In addition, a curious effect of the chamber size is noted. Compared to previous microgravity work in smaller tunnels, the flame in the larger tunnel spread more slowly, even for a wider sample. This is attributed to the effect of flow acceleration in the smaller tunnels as a result of hot gas expansion. These results clearly demonstrate the unique features of purely forced flow in microgravity on flame spread, the dependence of flame behavior on the scale of the experiment, and the importance of full-scale testing for spacecraft fire safety.

  12. Maximizing the reliability of genomic selection by optimizing the calibration set of reference individuals: comparison of methods in two diverse groups of maize inbreds (Zea mays L.).

    PubMed

    Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L

    2012-10-01

    Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix best linear unbiased prediction (RA-BLUP) model were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNP array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
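
    A rough sketch of a CDmean-style criterion is shown below. The formula follows the generalized coefficient-of-determination idea described above; the toy relationship matrix, the variance ratio, and the random-search loop are illustrative assumptions, not the paper's exchange algorithm or published script.

    ```python
    # CD of contrasts "candidate minus population mean" for a given calibration set.
    import numpy as np

    def cd_mean(A, calib_idx, lam):
        """Mean expected reliability over candidates outside the calibration set."""
        n = A.shape[0]
        m = len(calib_idx)
        Z = np.zeros((m, n))
        Z[np.arange(m), calib_idx] = 1.0
        M = np.eye(m) - np.ones((m, m)) / m              # projector orthogonal to the mean
        inner = np.linalg.inv(Z.T @ M @ Z + lam * np.linalg.inv(A))
        cov_reduction = A - lam * inner                   # A - lambda*(Z'MZ + lambda*A^-1)^-1
        cds = []
        for i in set(range(n)) - set(calib_idx):
            c = -np.ones(n) / n
            c[i] += 1.0
            cds.append((c @ cov_reduction @ c) / (c @ A @ c))
        return np.mean(cds)

    # Toy relationship matrix and a naive random search over calibration sets of size 20.
    rng = np.random.default_rng(7)
    G = rng.normal(size=(60, 500))                        # 60 lines, 500 markers (toy)
    G -= G.mean(axis=0)
    A = G @ G.T / G.shape[1] + 1e-3 * np.eye(60)          # realized relationship matrix
    lam = 1.0                                             # assumed sigma2_e / sigma2_g

    best = max((rng.choice(60, 20, replace=False) for _ in range(200)),
               key=lambda s: cd_mean(A, s, lam))
    print("best random calibration set CDmean:", round(cd_mean(A, best, lam), 3))
    ```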

  13. Neurocognitive performance in family-based and case-control studies of schizophrenia.

    PubMed

    Gur, Ruben C; Braff, David L; Calkins, Monica E; Dobie, Dorcas J; Freedman, Robert; Green, Michael F; Greenwood, Tiffany A; Lazzeroni, Laura C; Light, Gregory A; Nuechterlein, Keith H; Olincy, Ann; Radant, Allen D; Seidman, Larry J; Siever, Larry J; Silverman, Jeremy M; Sprock, Joyce; Stone, William S; Sugar, Catherine A; Swerdlow, Neal R; Tsuang, Debby W; Tsuang, Ming T; Turetsky, Bruce I; Gur, Raquel E

    2015-04-01

    Neurocognitive deficits in schizophrenia (SZ) are established and the Consortium on the Genetics of Schizophrenia (COGS) investigated such measures as endophenotypes in family-based (COGS-1) and case-control (COGS-2) studies. By requiring family participation, family-based sampling may result in samples that vary demographically and perform better on neurocognitive measures. The Penn computerized neurocognitive battery (CNB) evaluates accuracy and speed of performance for several domains and was administered across sites in COGS-1 and COGS-2. Most tests were included in both studies. COGS-1 included 328 patients with SZ and 497 healthy comparison subjects (HCS) and COGS-2 included 1195 patients and 1009 HCS. Demographically, COGS-1 participants were younger, more educated, with more educated parents and higher estimated IQ compared to COGS-2 participants. After controlling for demographics, the two samples produced very similar performance profiles compared to their respective controls. As expected, performance was better and with smaller effect sizes compared to controls in COGS-1 relative to COGS-2. Better performance was most pronounced for spatial processing while emotion identification had large effect sizes for both accuracy and speed in both samples. Performance was positively correlated with functioning and negatively with negative and positive symptoms in both samples, but correlations were attenuated in COGS-2, especially with positive symptoms. Patients ascertained through family-based design have more favorable demographics and better performance on some neurocognitive domains. Thus, studies that use case-control ascertainment may tap into populations with more severe forms of illness that are exposed to less favorable factors compared to those ascertained with family-based designs.

  14. Global Particle Size Distributions: Measurements during the Atmospheric Tomography (ATom) Project

    NASA Astrophysics Data System (ADS)

    Brock, C. A.; Williamson, C.; Kupc, A.; Froyd, K. D.; Richardson, M.; Weinzierl, B.; Dollner, M.; Schuh, H.; Erdesz, F.

    2016-12-01

    The Atmospheric Tomography (ATom) project is a three-year NASA-sponsored program to map the spatial and temporal distribution of greenhouse gases, reactive species, and aerosol particles from the Arctic to the Antarctic. In situ measurements are being made on the NASA DC-8 research aircraft, which will make four global circumnavigations of the Earth over the mid-Pacific and mid-Atlantic Oceans while continuously profiling between 0.2 and 13 km altitude. In situ microphysical measurements will provide a unique and unprecedented dataset of aerosol particle size distributions between 0.004 and 50 µm diameter. This unbiased, representative dataset allows investigation of new particle formation in the remote troposphere, placing strong observational constraints on the chemical and physical mechanisms that govern particle formation and growth to cloud-active sizes. Particles from 0.004 to 0.055 µm are measured with 10 condensation particle counters. Particles with diameters from 0.06 to 1.0 µm are measured with one-second resolution using two ultra-high sensitivity aerosol size spectrometers (UHSASes). A laser aerosol spectrometer (LAS) measures particle size distributions between 0.12 and 10 µm in diameter. Finally, a cloud, aerosol and precipitation spectrometer (CAPS) underwing probe sizes ambient particles with diameters from 0.5 to 50 µm and images and sizes precipitation-sized particles. Additional particle instruments on the payload include a high-resolution time-of-flight aerosol mass spectrometer and a single particle laser-ablation aerosol mass spectrometer. The instruments are calibrated in the laboratory and on the aircraft. Calibrations are checked in flight by introducing four sizes of polystyrene latex (PSL) microspheres into the sampling inlet. The CAPS probe is calibrated using PSL and glass microspheres that are aspirated into the sample volume. Comparisons between the instruments and checks with the calibration aerosol indicate flight performance within uncertainties expected from laboratory calibrations. Analysis of data from the first ATom circuit in August 2016 shows high concentrations of newly formed particles in the tropical middle and upper troposphere and Arctic lower troposphere.

  15. Incentive payments are not related to expected health gain in the pay for performance scheme for UK primary care: cross-sectional analysis

    PubMed Central

    2012-01-01

    Background The General Medical Services primary care contract for the United Kingdom financially rewards performance in 19 clinical areas, through the Quality and Outcomes Framework. Little is known about how best to determine the size of financial incentives in pay for performance schemes. Our aim was to test the hypothesis that performance indicators with larger population health benefits receive larger financial incentives. Methods We performed cross-sectional analyses to quantify associations between the size of financial incentives and expected health gain in the 2004 and 2006 versions of the Quality and Outcomes Framework. We used non-parametric two-sided Spearman rank correlation tests. Health gain was measured in expected lives saved in one year and in quality adjusted life years. For each quality indicator in an average-sized general practice we tested for associations, first, between the marginal increase in payment and the health gain resulting from a one-percentage-point improvement in performance and, second, between total payment and the health gain at the performance threshold for maximum payment. Results Evidence for lives saved or quality adjusted life years gained was found for 28 indicators accounting for 41% of the total incentive payments. No statistically significant associations were found between the expected health gain and incentive gained from a marginal 1% increase in performance in either the 2004 or 2006 version of the Quality and Outcomes Framework. In addition no associations were found between the size of financial payment for achievement of an indicator and the expected health gain at the performance threshold for maximum payment measured in lives saved or quality adjusted life years. Conclusions In this subgroup of indicators the financial incentives were not aligned to maximise health gain. This disconnection between incentive and expected health gain risks supporting clinical activities that are only marginally effective, at the expense of more effective activities receiving lower incentives. When designing pay for performance programmes, decisions about the size of the financial incentive attached to an indicator should be informed by information on the health gain to be expected from that indicator. PMID:22507660
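
    The correlation test itself is a one-liner; the payment and health-gain figures below are invented solely to show the form of the analysis.

    ```python
    # Spearman rank correlation between marginal payments and expected health gains.
    from scipy import stats

    marginal_payment = [120, 85, 40, 200, 60, 150, 95, 30]      # GBP per 1% improvement (assumed)
    qaly_gain        = [0.4, 0.1, 0.3, 0.2, 0.5, 0.1, 0.2, 0.6] # expected QALYs (assumed)

    rho, p = stats.spearmanr(marginal_payment, qaly_gain)
    print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")   # a non-significant result implies no alignment
    ```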

  16. Misclassification Errors in Unsupervised Classification Methods. Comparison Based on the Simulation of Targeted Proteomics Data

    PubMed Central

    Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M

    2016-01-01

    Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect molecular mechanisms of the subtypes of the disease and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). Therefore, we developed a simulation approach allowing comparison of misclassification errors and estimating the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated the data expected from the study of the plasma of patients with lower urinary tract dysfunction with the aptamer proteomics assay Somascan (SomaLogic Inc, Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, 80 in aging, etc. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better for the simulated data than the other two methods and enabled classification with misclassification error below 5% in the simulated cohort of 100 patients based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from among the 1129-protein panel. PMID:27524871
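
    The simulation design can be re-created in outline as follows; the abstract's headline numbers (100 patients, 1129 proteins, 40 informative proteins, effect size 1.5) are used, but the data-generating details and the standardization step are assumptions.

    ```python
    # Toy in-silico misclassification estimate for k-means on simulated proteomics data.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(8)
    n_patients, n_proteins, n_informative, effect = 100, 1129, 40, 1.5
    truth = np.repeat([0, 1], n_patients // 2)

    X = rng.normal(size=(n_patients, n_proteins))
    X[truth == 1, :n_informative] += effect            # subtype shift on the informative proteins

    X = (X - X.mean(0)) / X.std(0)                     # standardize each protein
    pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Misclassification error up to cluster-label switching.
    err = min(np.mean(pred != truth), np.mean(pred == truth))
    print(f"misclassification error: {err:.1%}")
    ```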

  17. Dietary Correlates of Primate Masticatory Muscle Fiber Architecture.

    PubMed

    Hartstone-Rose, Adam; Deutsch, Ashley R; Leischner, Carissa L; Pastor, Francisco

    2018-02-01

    Analyses of masticatory muscle architecture-specifically fascicle length (FL; a correlate of muscle stretch and contraction speed) and physiological cross-sectional area (PCSA; a correlate of force)-reveal soft-tissue dietary adaptations. For instance, consumers of large, soft foods are expected to have relatively long FL, while consumers of obdurate foods are expected to have relatively high PCSA. Unfortunately, only a few studies have analyzed these variables across large primate samples, an order of particular interest because it is our own. Previous studies found that, in strepsirrhines, force variables (PCSA and muscle masses; MM) scale with isometry or slight positive allometry, while the body-size-corrected FL residuals correlate with food sizes. However, a study of platyrrhines using different methods (in which the authors physically cut muscles between fascicles) found very different trends: negative allometry for both the stretch and force variables. Here, we apply the methods used in the strepsirrhine study (chemical dissection of fascicles to ensure full length measurements) to reevaluate these trends in platyrrhines and extend this research to include catarrhines. Our results conform to the previous strepsirrhine trends: there is no evidence of negative allometry in platyrrhines. Rather, in primates broadly and catarrhines specifically, MM and PCSA scale with isometry or positive allometry. When examining size-adjusted variables, it is clear that fascicle lengths (especially those of the temporalis muscle) correlate with diet: species that consume soft, larger foods have longer masticatory fiber lengths, which would allow them to open their jaws to wider gape angles. Anat Rec, 301:311-324, 2018. © 2018 Wiley Periodicals, Inc.

  18. Carbon isotope evidence for a magmatic origin for Archaean gold-quartz vein ore deposits

    NASA Technical Reports Server (NTRS)

    Burrows, D. R.; Wood, P. C.; Spooner, E. T. C.

    1986-01-01

    Sediments from three sites in the Santa Barbara Basin were examined with a 160X power light microscope and TEM equipment to characterize the magnetotactic bacteria (MB) in the samples. Both the free magnetite and the crystals in the MB in the samples had lengths of 40-60 nm and increased in size from one end to the other. An intact magnetosome was also observed. Scanning the sediments with saturation isothermal remanent magnetization (SIRM) and alternating field demagnetization techniques using a SQUID magnetometer yielded coercivity spectra which showed that the primary remanence carrier in the sediments was single domain magnetite. Although it is expected that the predominance of the bacterial magnetite component will decrease with depth in the open ocean basin, single-domain bacteria as old as 50 Myr have been observed in oceanic sediments.

  19. A mathematical model for maximizing the value of phase 3 drug development portfolios incorporating budget constraints and risk.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa

    2013-05-10

    We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
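
    The flavor of the optimization can be illustrated with a toy example (invented figures, and exhaustive search rather than the paper's integer programming formulation): each candidate trial has a few sample-size options with associated costs and expected net present values, plus a "do not develop" option, and the portfolio choice maximizes total expected NPV within a budget.

```python
# Toy portfolio illustration (assumed figures; a real model would use an IP solver).
from itertools import product

# per-trial options: (sample size, cost in $M, expected NPV in $M); (0, 0, 0) = do not run
options = {
    "drug_A": [(0, 0, 0), (200, 20, 150), (400, 35, 220), (600, 50, 260)],
    "drug_B": [(0, 0, 0), (150, 15, 90), (300, 28, 140), (450, 40, 160)],
    "drug_C": [(0, 0, 0), (250, 22, 110), (500, 40, 170)],
}
budget = 100  # $M

best = None
for choice in product(*options.values()):
    cost = sum(c for _, c, _ in choice)
    enpv = sum(v for _, _, v in choice)
    if cost <= budget and (best is None or enpv > best[0]):
        best = (enpv, cost, {d: n for d, (n, _, _) in zip(options, choice)})

enpv, cost, sizes = best
print(f"expected NPV {enpv} $M at cost {cost} $M with sample sizes {sizes}")
```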

  20. Protein variants in Hiroshima and Nagasaki: tales of two cities.

    PubMed Central

    Neel, J V; Satoh, C; Smouse, P; Asakawa, J; Takahashi, N; Goriki, K; Fujita, M; Kageoka, T; Hazama, R

    1988-01-01

    The results of 1,465,423 allele product determinations based on blood samples from Hiroshima and Nagasaki, involving 30 different proteins representing 32 different gene products, are analyzed in a variety of ways, with the following conclusions: (1) Sibships and their parents are included in the sample. Our analysis reveals that statistical procedures designed to reduce the sample to equivalent independent genomes do not in population comparisons compensate for the familial cluster effect of rare variants. Accordingly, the data set was reduced to one representative of each sibship (937,427 allele products). (2) Both chi-square-type contrasts and a genetic distance measure (delta) reveal that rare variants (P < .01) are collectively as effective as polymorphisms in establishing genetic differences between the two cities. (3) We suggest that rare variants that individually exhibit significant intercity differences are probably the legacy of tribal private polymorphisms that occurred during prehistoric times. (4) Despite the great differences in the known histories of the two cities, both the overall frequency of rare variants and the number of different rare variants are essentially identical in the two cities. (5) The well-known differences in locus variability are confirmed, now after adjustment for sample size differences for the various locus products; in this large series we failed to detect variants at only three of 29 loci for which sample size exceeded 23,000. (6) The number of alleles identified per locus correlates positively with subunit molecular weight. (7) Loci supporting genetic polymorphisms are characterized by more rare variants than are loci at which polymorphisms were not encountered. (8) Loci whose products do not appear to be essential for health support more variants than do loci the absence of whose product is detrimental to health. (9) There is a striking excess of rare variants over the expectation under the neutral mutation/drift/equilibrium theory. We suggest that this finding is primarily due to the relatively recent (in genetic time) agglomeration of previously separated tribal populations; efforts to test for agreement with the expectations of this theory by using data from modern cosmopolitan populations are exercises in futility. (10) All of these findings should characterize DNA variants in exons as more data become available, since the findings are the protein expression of such variants. PMID:3195587

  2. The Iodine Status of Queensland Preschool Children After the Introduction of Mandatory Iodine Fortification in Bread: An Exploratory Study Using a Convenience Sample.

    PubMed

    Samidurai, A J; Ware, R S; Davies, P S W

    2017-01-01

    Introduction Appropriate dietary iodine is essential for thyroid hormone synthesis, especially in young children. Following an iodine fortification in bread initiative, approximately 6% of Australian preschool children were expected to have an excessive iodine status. The aim of this study was to document the current iodine status of preschool children using urinary iodine concentration (UIC) as a biomarker of iodine intake. Methods A convenience sample of fifty-one preschool children, aged 2-3 years, was recruited from south east Queensland. UIC was ascertained from spot morning and afternoon urine samples collected on two consecutive days, and food frequency questionnaires were completed for each participant. Dietary iodine intake was extrapolated from UIC assuming 90% of dietary iodine is excreted in urine and a urine volume of 0.5 L/day. Results A median UIC of 223.3 μg/L was found. The calculated median dietary iodine intake was 124.8 μg/day (SD 47.0), with 9.8% of samples above the upper level of 200 μg for dietary iodine for children within this age group. No foods were associated with UIC. Discussion Limited by sample size and recruitment strategies, no association was found between usual food intake and UIC. Extrapolated dietary iodine intake indicated that children within this cohort consumed adequate amounts of dietary iodine, although the number of children consuming above the upper limit of 300 μg/day was almost double that expected. The development of UIC criteria to assess appropriate parameters for varying degrees of iodine status is required for the monitoring of iodine nutrition in this vulnerable age group.
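
    The extrapolation step is simple arithmetic and can be made explicit. A back-of-the-envelope check under the assumptions stated above follows; the function name and default values are illustrative, not taken from the study.

```python
# Daily iodine intake extrapolated from urinary iodine concentration (UIC),
# assuming 90% of dietary iodine is excreted in urine and 0.5 L urine per day.
def iodine_intake_ug_per_day(uic_ug_per_L, urine_L_per_day=0.5, excreted_fraction=0.9):
    return uic_ug_per_L * urine_L_per_day / excreted_fraction

print(iodine_intake_ug_per_day(223.3))  # ~124 ug/day, consistent with the reported median intake
```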

  3. The SAMPL4 host-guest blind prediction challenge: an overview.

    PubMed

    Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K

    2014-04-01

    Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allow higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems, based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as a consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly. Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each other's studies, and to systematically explore parameter options.

  4. Determining suspended sediment particle size information from acoustical and optical backscatter measurements

    NASA Astrophysics Data System (ADS)

    Lynch, James F.; Irish, James D.; Sherwood, Christopher R.; Agrawal, Yogesh C.

    1994-08-01

    During the winter of 1990-1991, an Acoustic BackScatter System (ABSS), five Optical Backscatterance Sensors (OBSs) and a Laser In Situ Settling Tube (LISST) were deployed in 90 m of water off the California coast for 3 months as part of the Sediment Transport Events on Shelves and Slopes (STRESS) experiment. By looking at sediment transport events with both optical (OBS) and acoustic (ABSS) sensors, one obtains information about the size of the particles transported as well as their concentration. Specifically, we employ two different methods of estimating "average particle size". First, we use vertical scattering intensity profile slopes (acoustical and optical) to infer average particle size using a Rouse profile model of the boundary layer and a Stokes law fall velocity assumption. Second, we use a combination of optics and acoustics to form a multifrequency (two frequency) inverse for the average particle size. These results are compared to independent observations from the LISST instrument, which measures the particle size spectrum in situ using laser diffraction techniques. Rouse profile based inversions for particle size are found to be in good agreement with the LISST results except during periods of transport event initiation, when the Rouse profile is not expected to be valid. The two frequency inverse, which is boundary layer model independent, worked reasonably well during all periods, with average particle sizes correlating well with the LISST estimates. In order to further corroborate the particle size inverses from the acoustical and optical instruments, we also examined size spectra obtained from in situ sediment grab samples and water column samples (suspended sediments), as well as laboratory tank experiments using STRESS sediments. Again, good agreement is noted. The laboratory tank experiment also allowed us to study the acoustical and optical scattering law characteristics of the STRESS sediments. It is seen that, for optics, using the cross sectional area of an equivalent sphere is a very good first approximation whereas for acoustics, which is most sensitive in the region ka ≈ 1, the particle volume itself is best sensed. In conclusion, we briefly interpret the history of some STRESS transport events in light of the size distribution and other information available. For one of the events, "anomalous" suspended particle size distributions are noted, i.e. larger particles are seen suspended before finer ones. Speculative hypotheses for why this signature is observed are presented.
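
    The Rouse-profile inversion described above can be sketched as follows (illustrative values only, not the STRESS processing code): near the bed the concentration profile behaves roughly as C(z) ∝ z^(−P) with Rouse number P = w_s/(κ u*), so the log-log slope of the measured profile gives the settling velocity w_s, and Stokes' law converts w_s to an equivalent grain diameter. The friction velocity and fluid properties below are assumed numbers.

```python
# Hedged sketch of a Rouse-profile + Stokes-law particle size estimate (assumed values).
import numpy as np

g, kappa, nu, s = 9.81, 0.40, 1.3e-6, 2.65   # gravity, von Karman, kinematic viscosity, rho_s/rho_w
u_star = 0.012                               # friction velocity (m/s), assumed

# synthetic "measured" concentration profile generated for a 30 micron grain
d_true = 30e-6
w_s_true = g * (s - 1) * d_true**2 / (18 * nu)
z = np.array([0.2, 0.5, 1.0, 2.0, 5.0])      # heights above bed (m)
C = 1e-3 * (z / z[0]) ** (-w_s_true / (kappa * u_star))

# invert: log-log slope -> Rouse number P -> settling velocity -> Stokes diameter
P = -np.polyfit(np.log(z), np.log(C), 1)[0]
w_s = P * kappa * u_star
d_est = np.sqrt(18 * nu * w_s / (g * (s - 1)))
print(f"estimated grain diameter: {d_est * 1e6:.1f} um (true: {d_true * 1e6:.0f} um)")
```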

  5. Particle Size-Selective Assessment of Protection of European Standard FFP Respirators and Surgical Masks against Particles-Tested with Human Subjects

    PubMed Central

    Hwang, Dong-Chir; Li, He-Yi; Tsai, Chieh-Fu; Chen, Chun-Wan; Chen, Jen-Kun

    2016-01-01

    This study was conducted to investigate the protection of disposable filtering half-facepiece respirators of different grades against particles between 0.093 and 1.61 μm. A personal sampling system was used to assess the protection of the respirators in a particle size-selective manner. The results show that about 10.9% of FFP2 respirators and 28.2% of FFP3 respirators demonstrate assigned protection factors (APFs) below 10 and 20, which are the levels assigned for these respirators by the British Standard. On average, the protection factors of FFP respirators were 11.5 to 15.9 times greater than those of surgical masks. The minimum protection factors (PFs) were observed for particles between 0.263 and 0.384 μm. No significant difference in PF results was found among FFP respirator categories or particle sizes. A strong association between fit factors and protection factors was found. The study indicates that FFP respirators may not achieve the expected protection level and the APFs may need to be revised for these classes of respirators. PMID:27195721

  6. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of a type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. 3.
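
    For context, the per-arm sample size needed to detect a standardized mean difference d in a two-arm comparison is approximately 2(z_{1−α/2} + z_{1−β})²/d². The sketch below uses this standard normal-approximation formula (it is not taken from the reviewed trials) and shows why detecting an effect of 0.3 at 80% power requires roughly 175 participants per arm.

```python
# Normal-approximation sample size per arm for a two-sample comparison of means.
from scipy.stats import norm

def n_per_arm(d, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z**2 / d**2

for d in (0.3, 0.5, 0.8):
    print(f"SMD = {d}: about {n_per_arm(d):.0f} participants per arm")
```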

  7. Marijuana Primes, Marijuana Expectancies, and Arithmetic Efficiency*

    PubMed Central

    Hicks, Joshua A.; Pedersen, Sarah L.; McCarthy, Denis M.; Friedman, Ronald S.

    2009-01-01

    Objective: Previous research has shown that primes associated with alcohol influence behavior consistent with specific alcohol expectancies. The present study examined whether exposure to marijuana-related primes and marijuana expectancies interact to produce similar effects. Specifically, the present study examined whether marijuana primes and marijuana expectancies regarding cognitive and behavioral impairment interact to influence performance on an arithmetic task. Method: Two independent samples (N = 260) of undergraduate students (both marijuana users and nonusers) first completed measures of marijuana-outcome expectancies associated with cognitive and behavioral impairment and with general negative effects (Sample 2). Later in the semester, participants were exposed to marijuana-related (or neutral) primes and then completed an arithmetic task. Results: Results from Sample 1 indicated that participants who were exposed to marijuana-themed magazine covers performed more poorly on the arithmetic task if they expected that marijuana would lead to cognitive and behavioral impairment. Results from Sample 2 indicated that, for marijuana users, cognitive and behavioral impairment expectancies, but not expectancies regarding general negative effects, similarly moderated arithmetic performance for participants exposed to marijuana-related words. Conclusions: Results support the hypothesis that the implicit activation of specific marijuana-outcome expectancies can influence cognitive processes. Implications for research on marijuana are discussed. PMID:19371490

  8. Asteroid Impact Deflection and Assessment (AIDA) mission - Full-Scale Modeling and Simulation of Ejecta Evolution and Fates

    NASA Astrophysics Data System (ADS)

    Fahnestock, Eugene G.; Yu, Yang; Hamilton, Douglas P.; Schwartz, Stephen; Stickle, Angela; Miller, Paul L.; Cheng, Andy F.; Michel, Patrick; AIDA Impact Simulation Working Group

    2016-10-01

    The proposed Asteroid Impact Deflection and Assessment (AIDA) mission includes NASA's Double Asteroid Redirection Test (DART), whose impact with the secondary of near-Earth binary asteroid 65803 Didymos is expected to liberate large amounts of ejecta. We present efforts within the AIDA Impact Simulation Working Group to comprehensively simulate the behavior of this impact ejecta as it moves through and exits the system. Group members at JPL, OCA, and UMD have been working largely independently, developing their own strategies and methodologies. Ejecta initial conditions may be imported from output of hydrocode impact simulations or generated from crater scaling laws derived from point-source explosion models. We started with the latter approach, using reasonable assumptions for the secondary's density, porosity, surface cohesive strength, and vanishingly small net gravitational/rotational surface acceleration. We adopted DART's planned size, mass, closing velocity, and impact geometry for the cratering event. Using independent N-Body codes, we performed Monte Carlo integration of ejecta particles sampled over reasonable particle size ranges, and over launch locations within the crater footprint. In some cases we scaled the number of integrated particles in various size bins to the estimated number of particles consistent with a realistic size-frequency distribution. Dynamical models used for the particle integration varied, but all included full gravity potential of both primary and secondary, the solar tide, and solar radiation pressure (accounting for shadowing). We present results for the proportions of ejecta reaching ultimate fates of escape, return impact on the secondary, and transfer impact onto the primary. We also present the time history of reaching those outcomes, i.e., ejecta clearing timescales, and the size-frequency distribution of remaining ejecta at given post-impact durations. We find large numbers of particles remain in the system for several weeks after impact. Clearing timescales are nonlinearly dependent on particle size as expected, such that only the largest ejecta persist longest. We find results are strongly dependent on the local surface geometry at the modeled impact locations.

  9. Analysis of Gap in Service Quality in Drug Addiction Treatment Centers of Kerman, Iran, Using SERVQUAL Model.

    PubMed

    Naqavi, Mohammad Reza; Refaiee, Raheleh; Baneshi, Mohammad Reza; Nakhaee, Nouzar

    2014-01-01

    Treatment of drug addicts is one of the main strategies of drug control in Iran. Client satisfaction strongly influences the success of any treatment program. This study aimed to explore the difference between customer expectations and perceptions in drug addiction treatment centers of Kerman, Iran, using SERVQUAL model. Using a cross-sectional design, 260 clients referred to drug addiction treatment centers in Kerman were enrolled in 2012. From among 84 clinics, 20 centers were selected randomly. Based on the number of clients registered in each center, a random sample proportional to the size was selected and 290 subjects were invited for interviews. A well validated 22-item questionnaire, which measured the 5 dimensions of service quality (reliability, assurance, tangibility, empathy, and responsiveness), was completed by participants. Each item measured 2 aspects of service quality; expectations and perceptions. Mean ± SD age of the subjects was 37.7 ± 9.4 years. Most of them were male (87.7%). Less than half of them had an educational level lower than diploma. The total score of clients' expectations was higher than their perceptions (P < 0.001). Considering the 5 dimensions of the SERVQUAL model, only 1 dimension (i.e., assurance) showed no difference between perceptions and expectations of the participants (P = 0.134). There was a gap between the clients' expectations and what they actually perceived in the clinics. Thus, more attention should be devoted to the clients' views regarding service quality in addiction treatment clinics.

  10. Relieving patients' pain with expectation interventions: a meta-analysis.

    PubMed

    Peerdeman, Kaya J; van Laarhoven, Antoinette I M; Keij, Sascha M; Vase, Lene; Rovers, Maroeska M; Peters, Madelon L; Evers, Andrea W M

    2016-06-01

    Patients' expectations are important predictors of the outcome of analgesic treatments, as demonstrated predominantly in research on placebo effects. Three commonly investigated interventions that have been found to induce expectations (verbal suggestion, conditioning, and mental imagery) entail promising, brief, and easy-to-implement adjunctive procedures for optimizing the effectiveness of analgesic treatments. However, evidence for their efficacy stems mostly from research on experimentally evoked pain in healthy samples, and these findings might not be directly transferable to clinical populations. The current meta-analysis investigated the effects of these expectation inductions on patients' pain relief. Five bibliographic databases were systematically searched for studies that assessed the effects of brief verbal suggestion, conditioning, or imagery interventions on pain in clinical populations, with patients experiencing experimental, acute procedural, or chronic pain, compared with no treatment or control treatment. Of the 15,955 studies retrieved, 30 met the inclusion criteria, of which 27 provided sufficient data for quantitative analyses. Overall, a medium-sized effect of the interventions on patients' pain relief was observed (Hedges g = 0.61, I² = 73%), with varying effects of verbal suggestion (k = 18, g = 0.75), conditioning (always paired with verbal suggestion, k = 3, g = 0.65), and imagery (k = 6, g = 0.27). Subset analyses indicated medium to large effects on experimental and acute procedural pain and small effects on chronic pain. In conclusion, patients' pain can be relieved with expectation interventions; particularly, verbal suggestion for acute procedural pain was found to be effective.

  11. Self-organized Segregation on the Grid

    NASA Astrophysics Data System (ADS)

    Omidvar, Hamed; Franceschetti, Massimo

    2018-02-01

    We consider an agent-based model with exponentially distributed waiting times in which two types of agents interact locally over a graph, and based on this interaction and on the value of a common intolerance threshold τ, decide whether to change their types. This is equivalent to a zero-temperature Ising model with Glauber dynamics, an asynchronous cellular automaton with extended Moore neighborhoods, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks, and spin glass systems. Some rigorous results were recently obtained in the theoretical computer science literature, and this work provides several extensions. We enlarge the intolerance interval leading to the expected formation of large segregated regions of agents of a single type from the known size ε > 0 to size ≈ 0.134. Namely, we show that for 0.433 < τ < 1/2 (and by symmetry 1/2 < τ < 0.567), the expected size of the largest segregated region containing an arbitrary agent is exponential in the size of the neighborhood. We further extend the interval leading to expected large segregated regions to size ≈ 0.312 considering "almost segregated" regions, namely regions where the ratio of the number of agents of one type and the number of agents of the other type vanishes quickly as the size of the neighborhood grows. In this case, we show that for 0.344 < τ ≤ 0.433 (and by symmetry for 0.567 ≤ τ < 0.656) the expected size of the largest almost segregated region containing an arbitrary agent is exponential in the size of the neighborhood. This behavior is reminiscent of supercritical percolation, where small clusters of empty sites can be observed within any sufficiently large region of the occupied percolation cluster. The exponential bounds that we provide also imply that complete segregation, where agents of a single type cover the whole grid, does not occur with high probability for p = 1/2 and the range of intolerance considered.
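
    A minimal simulation in the spirit of the model described above is sketched below. It is illustrative only: it uses random-order asynchronous updates rather than the paper's exponential waiting times, picks arbitrary grid and neighborhood sizes, and makes no attempt to reproduce the rigorous bounds.

```python
# Toy Schelling-type dynamics on a torus: an agent flips its type whenever the
# fraction of like-typed agents in its (2w+1)x(2w+1) Moore neighborhood is below tau.
import numpy as np

rng = np.random.default_rng(1)
N, w, tau, steps = 100, 2, 0.45, 200_000
grid = rng.choice([-1, 1], size=(N, N))

def like_fraction(grid, i, j, w):
    rows = np.arange(i - w, i + w + 1) % grid.shape[0]
    cols = np.arange(j - w, j + w + 1) % grid.shape[1]
    nb = grid[np.ix_(rows, cols)]
    same = np.sum(nb == grid[i, j]) - 1        # exclude the agent itself
    return same / (nb.size - 1)

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if like_fraction(grid, i, j, w) < tau:
        grid[i, j] *= -1                       # intolerant (unhappy) agent switches type

segregation = np.mean([like_fraction(grid, i, j, w) for i in range(N) for j in range(N)])
print(f"mean like-neighbor fraction after {steps} updates: {segregation:.2f}")
```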

  12. Deterministic assembly of land snail communities according to species size and diet.

    PubMed

    Schamp, Brandon; Horsák, Michal; Hájek, Michal

    2010-07-01

    1. We investigated whether coexisting snail species in 145 treeless fen communities in the Western Carpathian Mountains differed more in size and diet than would be expected by chance, as predicted for traits commonly associated with competition and differential resource acquisition under limiting similarity theory. 2. Contrary to expectations, coexisting snail species were no more different in body size than expected by chance under a null model. However, variation in body size played a significant role in structuring snail communities: coexisting snail species were significantly more similar with respect to body size. 3. We developed two new test statistics to expand our investigation of limiting similarity to include diet, a nominal trait. We tested whether communities of snails were characterized by a greater richness of diet, and whether different diets were represented more or less evenly within communities. Communities of snails were significantly less evenly distributed than expected by chance, with detritivores being over-represented relative to predatory strategies. 4. We also examined the effect of water pH and conductivity, herbaceous cover, and bryophyte and vascular plant richness, on these trends by examining how the effect size of our tests varied across these gradients. Convergence in species size increased with increasing habitat pH. Specifically, smaller snail species were over-represented in fen communities in general, and this effect was accentuated in increasingly calcareous fens. 5. Theory predicts that traits related strongly to environmental conditions are more likely to be convergent. Our findings support this suggestion, as small snail species have an advantage in tolerating freezing conditions over winter when refuges are limited. 6. These results add to the growing body of literature demonstrating that variation in body size and diet play a strong role in structuring communities, although frequently in ways not predicted by limiting similarity theory. Finally, our results increase our understanding of how species are assembled non-randomly into communities with respect to important traits.
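
    The randomization logic behind such null-model tests can be sketched as follows (invented species pool and community, not the study's data): the observed mean pairwise body-size difference is compared with the distribution obtained by repeatedly drawing the same number of species at random from the regional pool, giving a standardized effect size and a one-sided P value for convergence.

```python
# Null-model test for body-size convergence (illustrative data).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
pool = rng.lognormal(mean=1.0, sigma=0.6, size=60)       # body sizes (mm) of the species pool
community = rng.choice(pool, size=8, replace=False)      # species observed to co-occur

def mean_pairwise_diff(sizes):
    return np.mean([abs(a - b) for a, b in combinations(sizes, 2)])

obs = mean_pairwise_diff(community)
null = np.array([mean_pairwise_diff(rng.choice(pool, 8, replace=False)) for _ in range(999)])
ses = (obs - null.mean()) / null.std()                   # standardized effect size
p_convergence = (np.sum(null <= obs) + 1) / (len(null) + 1)   # small p = species more similar than chance
print(f"SES = {ses:.2f}, one-sided P(convergence) = {p_convergence:.3f}")
```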

  13. Undersampling power-law size distributions: effect on the assessment of extreme natural hazards

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2014-01-01

    The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one to several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits to the hazard source size and attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historic data.
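
    The undersampling effect is easy to reproduce. The toy illustration below (not the analysis in the paper; the exponent and threshold are arbitrary) shows that short catalogs drawn from one and the same pure Pareto distribution can look either tail-heavy or tail-deficient purely by chance.

```python
# Largest event in synthetic catalogs drawn from a pure Pareto distribution.
import numpy as np

rng = np.random.default_rng(3)
beta, xmin = 1.0, 1.0                          # power-law exponent and measurement threshold

def pareto_catalog(n):
    return xmin * (1 - rng.random(n)) ** (-1 / beta)

for n in (50, 500, 5000):
    maxima = [pareto_catalog(n).max() for _ in range(5)]
    print(f"catalog length {n}: largest events {[round(m) for m in maxima]}")
```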

  14. Selective sensitivity of Mueller imaging for tissue scattering over absorption changes in cancer mimicking phantoms

    NASA Astrophysics Data System (ADS)

    Fathima, Adeeba; Sharma B. S., Mahima; N., Sujatha

    2018-03-01

    Tissue characterization using optical polarimetry, especially Mueller imaging, is receiving sustained interest due to its potential in achieving optical contrast between normal and malignant variations. This is particularly important in identifying the margin of malignant growth in suspected tissue regions for accurate surgical removal, or in aiding the sampling procedure during biopsy. The sensitivity of Mueller matrix derived depolarization index to the combined effects of changes in scattering and absorption occurring in a cancerous growth is illustrated in this study. Depolarization imaging is shown to be useful in demarcating the boundary of two regions of differing optical properties using a tissue phantom, modeled according to the changes expected during cancerous growth in tissue. Tissue scattering and absorption are expected to generally increase with the nuclear size change and crowding as well as angiogenesis associated with malignancy. We have observed that there is selective sensitivity for the Mueller elements and derived depolarization index to tissue scattering over absorption in the object field. Although the scattering and absorption are expected to increase and decrease depolarization, respectively, the optical contrast of Mueller images and the derived depolarization index between normal and cancerous tissue is found to be appreciable in this region.

  15. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357
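
    The diagnostic can be reproduced in simulation (a quick illustrative sketch, not the authors' analysis, with invented study parameters): when the true effect is the same for every study, effect size and sample size are essentially uncorrelated, but selecting only studies with p < .05 induces the negative correlation, because small studies need large observed effects to reach significance.

```python
# How selective publication induces a negative effect size / sample size correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_d, n_studies = 0.3, 5000
n = rng.integers(10, 200, n_studies)            # per-group sample sizes
d_hat = rng.normal(true_d, np.sqrt(2 / n))      # observed effect sizes (approximate sampling error)
t = d_hat / np.sqrt(2 / n)                      # approximate test statistic
p = 2 * stats.t.sf(np.abs(t), df=2 * n - 2)

published = p < 0.05
r_all = stats.spearmanr(n, d_hat)[0]
r_pub = stats.spearmanr(n[published], d_hat[published])[0]
print(f"all simulated studies: r = {r_all:.2f}; 'published' studies only: r = {r_pub:.2f}")
```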

  17. Wave transmission approach based on modal analysis for embedded mechanical systems

    NASA Astrophysics Data System (ADS)

    Cretu, Nicolae; Nita, Gelu; Ioan Pop, Mihail

    2013-09-01

    An experimental method for determining the phase velocity in small solid samples is proposed. The method is based on measuring the resonant frequencies of a binary or ternary solid elastic system comprising the small sample of interest and a gauge material of manageable size. The wave transmission matrix of the combined system is derived and the theoretical values of its eigenvalues are used to determine the expected eigenfrequencies that, equated with the measured values, allow for the numerical estimation of the phase velocities in both materials. The known phase velocity of the gauge material is then used to assess the accuracy of the method. Using computer simulation and the experimental values for phase velocities, the theoretical values for the eigenfrequencies of the eigenmodes of the embedded elastic system are obtained, to validate the method. We conclude that the proposed experimental method may be reliably used to determine the elastic properties of small solid samples whose geometries do not allow a direct measurement of their resonant frequencies.

  18. Reliability generalization study of the Yale-Brown Obsessive-Compulsive Scale for children and adolescents.

    PubMed

    López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Ma; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa

    2015-01-01

    The Yale-Brown Obsessive-Compulsive Scale for children and adolescents (CY-BOCS) is a frequently applied test to assess obsessive-compulsive symptoms. We conducted a reliability generalization meta-analysis on the CY-BOCS to estimate the average reliability, search for reliability moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of the CY-BOCS scores. A total of 47 studies reporting a reliability coefficient with the data at hand were included in the meta-analysis. The results showed good reliability and a large variability associated with the standard deviation of total scores and sample size.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen V.

    This document outlines a statistical framework for establishing a shelf-life program for components whose performance is measured by the value of a continuous variable such as voltage or function time. The approach applies to both single measurement devices and repeated measurement devices, although additional process control charts may be useful in the case of repeated measurements. The approach is to choose a sample size that protects the margin associated with a particular variable over the life of the component. Deviations from expected performance of the measured variable are detected prior to the complete loss of margin. This ensures the reliability of the component over its lifetime.

  20. 49 CFR 1244.4 - Sampling of waybills.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Sampling of waybills. 1244.4 Section 1244.4... PROPERTY-RAILROADS § 1244.4 Sampling of waybills. (a) Subject railroads shall file waybill sample... expected sampling rates for the manual system are as follows: Numbers of carloads on waybill Expected...

  1. 49 CFR 1244.4 - Sampling of waybills.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 9 2011-10-01 2011-10-01 false Sampling of waybills. 1244.4 Section 1244.4... PROPERTY-RAILROADS § 1244.4 Sampling of waybills. (a) Subject railroads shall file waybill sample... expected sampling rates for the manual system are as follows: Numbers of carloads on waybill Expected...

  2. Illusory expectations can affect retrieval-monitoring accuracy.

    PubMed

    McDonough, Ian M; Gallo, David A

    2012-03-01

    The present study investigated how expectations, even when illusory, can affect the accuracy of memory decisions. Participants studied words presented in large or small font for subsequent memory tests. Replicating prior work, judgments of learning indicated that participants expected to remember large words better than small words, even though memory for these words was equivalent on a standard test of recognition memory and subjective judgments. Critically, we also included tests that instructed participants to selectively search memory for either large or small words, thereby allowing different memorial expectations to contribute to performance. On these tests we found reduced false recognition when searching memory for large words relative to small words, such that the size illusion paradoxically affected accuracy measures (d' scores) in the absence of actual memory differences. Additional evidence for the role of illusory expectations was that (a) the accuracy effect was obtained only when participants searched memory for the aspect of the stimuli corresponding to illusory expectations (size instead of color) and (b) the accuracy effect was eliminated on a forced-choice test that prevented the influence of memorial expectations. These findings demonstrate the critical role of memorial expectations in the retrieval-monitoring process. 2012 APA, all rights reserved

  3. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
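
    The general logic of cost-constrained allocation can be illustrated with the classical normal-theory case. The sketch below is generic and is not Yuen's trimmed-mean formula from the paper: for group standard deviations s1, s2 and per-subject costs c1, c2, total cost is minimized for a fixed precision when n1/n2 = (s1/s2)·sqrt(c2/c1), and the group sizes are then scaled to reach the target power for a difference delta.

```python
# Cost-minimizing allocation for a two-group mean comparison (normal approximation).
import math
from scipy.stats import norm

def allocate(s1, s2, c1, c2, delta, alpha=0.05, power=0.80):
    ratio = (s1 / s2) * math.sqrt(c2 / c1)            # optimal n1 / n2
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # require s1^2/n1 + s2^2/n2 = (delta / z)^2 to reach the target power for effect delta
    n2 = (s1**2 / ratio + s2**2) * (z / delta) ** 2
    return math.ceil(ratio * n2), math.ceil(n2)

n1, n2 = allocate(s1=12, s2=8, c1=50, c2=20, delta=5)
print(f"n1 = {n1}, n2 = {n2}, total cost = {50 * n1 + 20 * n2}")
```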

  4. Précis of statistical significance: rationale, validity, and utility.

    PubMed

    Chow, S L

    1998-04-01

    The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.

  5. A search for faint high-redshift radio galaxy candidates at 150 MHz

    NASA Astrophysics Data System (ADS)

    Saxena, A.; Jagannathan, P.; Röttgering, H. J. A.; Best, P. N.; Intema, H. T.; Zhang, M.; Duncan, K. J.; Carilli, C. L.; Miley, G. K.

    2018-04-01

    Ultrasteep spectrum (USS) radio sources are good tracers of powerful radio galaxies at z > 2. Identification of even a single bright radio galaxy at z > 6 can be used to detect redshifted 21 cm absorption due to neutral hydrogen in the intervening intergalactic medium. Here we describe a new sample of high-redshift radio galaxy (HzRG) candidates constructed from the TIFR GMRT Sky Survey First Alternative Data Release survey at 150 MHz. We employ USS selection (α ≤ -1.3) in ∼10,000 deg², in combination with strict size selection and non-detections in all-sky optical and infrared surveys. We apply flux density cuts that probe a unique parameter space in flux density (50 mJy < S150 < 200 mJy) to build a sample of 32 HzRG candidates. Follow-up Karl G. Jansky Very Large Array (VLA) observations at 1.4 GHz with an average beam size of 1.3 arcsec revealed ∼48 per cent of sources to have a single radio component. P-band (370 MHz) imaging of 17 of these sources revealed a flattening radio SED for 10 sources at low frequencies, which is expected from compact HzRGs. Two of our sources lie in fields where deeper multiwavelength photometry and ancillary radio data are available and for one of these we find a best-fitting photo-z of 4.8 ± 2.0. The other source has zphot = 1.4 ± 0.1 and a small angular size (3.7 arcsec), which could be associated with an obscured star-forming galaxy or with a 'dead' elliptical.

  6. SAMPL4 & DOCK3.7: lessons for automated docking procedures

    NASA Astrophysics Data System (ADS)

    Coleman, Ryan G.; Sterling, Teague; Weiss, Dahlia R.

    2014-03-01

    The SAMPL4 challenges were used to test current automated methods for solvation energy, virtual screening, pose and affinity prediction of the molecular docking pipeline DOCK 3.7. Additionally, first-order models of binding affinity were proposed as milestones for any method predicting binding affinity. Several important discoveries about the molecular docking software were made during the challenge: (1) Solvation energies of ligands were five-fold worse than those from any other method used in SAMPL4, including methods that were similarly fast; (2) HIV Integrase is a challenging target, but automated docking on the correct allosteric site performed well in terms of virtual screening and pose prediction (compared to other methods), while affinity prediction, as expected, was very poor; (3) Molecular docking grid sizes can be very important; serious errors were discovered with default settings, which have been adjusted for all future work. Overall, lessons from SAMPL4 suggest many changes to molecular docking tools, not just DOCK 3.7, that could improve the state of the art. Future difficulties and projects will be discussed.

  7. Method to determine 226Ra in small sediment samples by ultralow background liquid scintillation.

    PubMed

    Sanchez-Cabeza, Joan-Albert; Kwong, Laval Liong Wee; Betti, Maria

    2010-08-15

    ²¹⁰Pb dating of sediment cores is a widely used tool to reconstruct ecosystem evolution and historical pollution during the last century. Although ²²⁶Ra can be determined by gamma spectrometry, this method has severe limitations, among them sample size requirements and counting times. In this work, we propose a new strategy based on the analysis of ²¹⁰Pb through ²¹⁰Po in equilibrium by alpha spectrometry, followed by the determination of ²²⁶Ra (base or supported ²¹⁰Pb) without any further chemical purification by liquid scintillation and with a higher sample throughput. Although gamma spectrometry might still be required to determine ¹³⁷Cs as an independent tracer, the effort can then be focused only on those sections dated around 1963, when maximum activities are expected. In this work, we optimized the counting conditions, calibrated the system for changing quenching, and described the new method to determine ²²⁶Ra in small sediment samples, after ²¹⁰Po determination, allowing a more precise determination of excess ²¹⁰Pb. The method was validated with reference materials IAEA-384, IAEA-385, and IAEA-313.

  8. Underestimating extreme events in power-law behavior due to machine-dependent cutoffs

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo

    2014-11-01

    Power-law distributions are typical macroscopic features occurring in almost all complex systems observable in nature. As a result, researchers in quantitative analyses must often generate random synthetic variates obeying power-law distributions. The task is usually performed through standard methods that map uniform random variates into the desired probability space. Whereas all these algorithms are theoretically solid, in this paper we show that they are subject to severe machine-dependent limitations. As a result, two dramatic consequences arise: (i) the sampling in the tail of the distribution is not random but deterministic; (ii) the moments of the sample distribution, which are theoretically expected to diverge as functions of the sample sizes, converge instead to finite values. We provide quantitative indications for the range of distribution parameters that can be safely handled by standard libraries used in computational analyses. Whereas our findings indicate possible reinterpretations of numerical results obtained through flawed sampling methodologies, they also pave the way for the search for a concrete solution to this central issue shared by all quantitative sciences dealing with complexity.
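
    The cutoff itself follows from floating-point resolution. The calculation below is an illustration under stated assumptions (a 53-bit uniform generator and the common inverse-transform recipe x = x_min(1 − u)^(−1/(α−1))), not the paper's derivation: since a double-precision uniform variate cannot come closer to 1 than roughly 2⁻⁵³, the largest value the generator can ever emit is hard-capped.

```python
# Hard cap on power-law variates generated by inverse-transform sampling with doubles.
alpha = 2.5
eps = 2.0 ** -53                        # smallest gap below 1.0 for a 53-bit uniform variate
x_min = 1.0
hard_cap = x_min * eps ** (-1.0 / (alpha - 1.0))
print(f"largest value the generator can return: about {hard_cap:.3g}")
# ~4.3e10 for alpha = 2.5: larger events are never sampled, so sample moments that
# should diverge (here the second moment) converge to finite values instead.
```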

  9. Sample size and power estimation for studies with health related quality of life outcomes: a comparison of four methods using the SF-36.

    PubMed

    Walters, Stephen J

    2004-05-25

    We describe and compare four different methods for estimating sample size and power, when the primary outcome of the study is a Health Related Quality of Life (HRQoL) measure. These methods are: 1. assuming a Normal distribution and comparing two means; 2. using a non-parametric method; 3. Whitehead's method based on the proportional odds model; 4. the bootstrap. We illustrate the various methods, using data from the SF-36. For simplicity this paper deals with studies designed to compare the effectiveness (or superiority) of a new treatment compared to a standard treatment at a single point in time. The results show that if the HRQoL outcome has a limited number of discrete values (< 7) and/or the expected proportion of cases at the boundaries is high (scoring 0 or 100), then we would recommend using Whitehead's method (Method 3). Alternatively, if the HRQoL outcome has a large number of distinct values and the proportion at the boundaries is low, then we would recommend using Method 1. If a pilot or historical dataset is readily available (to estimate the shape of the distribution) then bootstrap simulation (Method 4) based on this data will provide a more accurate and reliable sample size estimate than conventional methods (Methods 1, 2, or 3). In the absence of a reliable pilot set, bootstrapping is not appropriate and conventional methods of sample size estimation or simulation will need to be used. Fortunately, with the increasing use of HRQoL outcomes in research, historical datasets are becoming more readily available. Strictly speaking, our results and conclusions only apply to the SF-36 outcome measure. Further empirical work is required to see whether these results hold true for other HRQoL outcomes. However, the SF-36 has many features in common with other HRQoL outcomes: multi-dimensional, ordinal or discrete response categories with upper and lower bounds, and skewed distributions, so therefore, we believe these results and conclusions using the SF-36 will be appropriate for other HRQoL measures.
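
    The bootstrap approach (Method 4) can be sketched as follows; this is a hedged illustration with an invented pilot sample and an assumed test, not the paper's code. The idea is to resample the pilot data into two arms, add the hypothesised treatment effect to one arm, apply the planned test, and take the rejection rate over many resamples as the estimated power at that sample size.

```python
# Bootstrap power estimation from a pilot sample of bounded, skewed HRQoL-like scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
pilot = np.clip(rng.normal(65, 25, size=80), 0, 100)     # assumed pilot scores on a 0-100 scale

def bootstrap_power(pilot, n_per_arm, effect, n_sim=2000, alpha=0.05):
    rejections = 0
    for _ in range(n_sim):
        control = rng.choice(pilot, n_per_arm, replace=True)
        treated = np.clip(rng.choice(pilot, n_per_arm, replace=True) + effect, 0, 100)
        if stats.mannwhitneyu(treated, control, alternative="two-sided").pvalue < alpha:
            rejections += 1
    return rejections / n_sim

for n in (50, 100, 150):
    print(f"n per arm = {n}: estimated power = {bootstrap_power(pilot, n, effect=10):.2f}")
```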

  10. Seasonal trophic structure of the Scotia Sea pelagic ecosystem considered through biomass spectra and stable isotope analysis

    NASA Astrophysics Data System (ADS)

    Tarling, G. A.; Stowasser, G.; Ward, P.; Poulton, A. J.; Zhou, M.; Venables, H. J.; McGill, R. A. R.; Murphy, E. J.

    2012-01-01

    The biomass size structure of pelagic communities provides a system level perspective that can be instructive when considering trophic interactions. Such perspectives can become even more powerful when combined with taxonomic information and stable isotope analysis. Here we apply these approaches to the pelagic community of the Scotia Sea (Southern Ocean) and consider the structure and development of trophic interactions over different years and seasons. Samples were collected from three open-ocean cruises during the austral spring 2006, summer 2008 and autumn 2009. Three main sampling techniques were employed: sampling bottles for microplankton (0-50 m), vertically hauled fine meshed nets for mesozooplankton (0-400 m) and coarse-meshed trawls for macrozooplankton and nekton (0-1000 m). All samples were identified to the lowest practicable taxonomic level and their abundance, individual body weight and biomass (in terms of carbon) estimated. Slopes of the normalised biomass spectrum versus size showed a significant but not substantial difference between cruises and were between -1.09 and -1.06. These slopes were shallower than expected for a community at equilibrium and indicated that there was an accumulation of biomass in the larger size classes (10^1-10^5 mg C ind^-1). A secondary structure of biomass domes was also apparent, with the domes being 2.5-3 log10 intervals apart in spring and summer and 2 log10 intervals apart in autumn. The recruitment of the copepod-consuming macrozooplankton Euphausia triacantha and Themisto gaudichaudii into an additional biomass dome was responsible for the decrease in the inter-dome interval in autumn. Predator to prey mass ratios estimated from stable isotope analysis reached a minimum in autumn while the estimated trophic level of myctophid fish was highest in that season. This reflected greater amounts of internal recycling and increased numbers of trophic levels in autumn compared to earlier times of the year. The accumulation of biomass in larger size classes throughout the year in the Scotia Sea may reflect the prevalence of species that store energy and have multiyear life-cycles.

  11. Can Genetic Analysis of Putative Blood Alzheimer’s Disease Biomarkers Lead to Identification of Susceptibility Loci?

    PubMed Central

    Huebinger, Ryan M.; Shewale, Shantanu J.; Koenig, Jessica L.; Mitchel, Jeffrey S.; O’Bryant, Sid E.; Waring, Stephen C.; Diaz-Arrastia, Ramon; Chasse, Scott

    2015-01-01

    Although 24 Alzheimer’s disease (AD) risk loci have been reliably identified, a large portion of the predicted heritability for AD remains unexplained. It is expected that additional loci of small effect will be identified with an increased sample size. However, the cost of a significant increase in Case-Control sample size is prohibitive. The current study tests whether exploring the genetic basis of endophenotypes, in this case based on putative blood biomarkers for AD, can accelerate the identification of susceptibility loci using modest sample sizes. Each endophenotype was used as the outcome variable in an independent GWAS. Endophenotypes were based on circulating concentrations of proteins that contributed significantly to a published blood-based predictive algorithm for AD. Endophenotypes included Monocyte Chemoattractant Protein 1 (MCP1), Vascular Cell Adhesion Molecule 1 (VCAM1), Pancreatic Polypeptide (PP), Beta2 Microglobulin (B2M), Factor VII (F7), Adiponectin (ADN) and Tenascin C (TN-C). Across the seven endophenotypes, 47 SNPs were associated with outcome with a p-value ≤ 1 × 10^-7. Each signal was further characterized with respect to known genetic loci associated with AD. Signals for several endophenotypes were observed in the vicinity of CR1, MS4A6A/MS4A4E, PICALM, CLU, and PTK2B. The strongest signal was observed in association with Factor VII levels and was located within the F7 gene. Additional signals were observed in MAP3K13, ZNF320, ATP9B and TREM1. Conditional regression analyses suggested that the SNPs contributed to variation in protein concentration independent of AD status. The identification of two putatively novel AD loci (in the Factor VII and ATP9B genes), which have not been located in previous studies despite massive sample sizes, highlights the benefits of an endophenotypic approach for resolving the genetic basis for complex diseases. The coincidence of several of the endophenotypic signals with known AD loci may point to novel genetic interactions and should be further investigated. PMID:26625115

  12. Can Genetic Analysis of Putative Blood Alzheimer's Disease Biomarkers Lead to Identification of Susceptibility Loci?

    PubMed

    Barber, Robert C; Phillips, Nicole R; Tilson, Jeffrey L; Huebinger, Ryan M; Shewale, Shantanu J; Koenig, Jessica L; Mitchel, Jeffrey S; O'Bryant, Sid E; Waring, Stephen C; Diaz-Arrastia, Ramon; Chasse, Scott; Wilhelmsen, Kirk C

    2015-01-01

    Although 24 Alzheimer's disease (AD) risk loci have been reliably identified, a large portion of the predicted heritability for AD remains unexplained. It is expected that additional loci of small effect will be identified with an increased sample size. However, the cost of a significant increase in Case-Control sample size is prohibitive. The current study tests whether exploring the genetic basis of endophenotypes, in this case based on putative blood biomarkers for AD, can accelerate the identification of susceptibility loci using modest sample sizes. Each endophenotype was used as the outcome variable in an independent GWAS. Endophenotypes were based on circulating concentrations of proteins that contributed significantly to a published blood-based predictive algorithm for AD. Endophenotypes included Monocyte Chemoattractant Protein 1 (MCP1), Vascular Cell Adhesion Molecule 1 (VCAM1), Pancreatic Polypeptide (PP), Beta2 Microglobulin (B2M), Factor VII (F7), Adiponectin (ADN) and Tenascin C (TN-C). Across the seven endophenotypes, 47 SNPs were associated with outcome with a p-value ≤ 1 × 10^-7. Each signal was further characterized with respect to known genetic loci associated with AD. Signals for several endophenotypes were observed in the vicinity of CR1, MS4A6A/MS4A4E, PICALM, CLU, and PTK2B. The strongest signal was observed in association with Factor VII levels and was located within the F7 gene. Additional signals were observed in MAP3K13, ZNF320, ATP9B and TREM1. Conditional regression analyses suggested that the SNPs contributed to variation in protein concentration independent of AD status. The identification of two putatively novel AD loci (in the Factor VII and ATP9B genes), which have not been located in previous studies despite massive sample sizes, highlights the benefits of an endophenotypic approach for resolving the genetic basis for complex diseases. The coincidence of several of the endophenotypic signals with known AD loci may point to novel genetic interactions and should be further investigated.

  13. The influence of pore textures on the permeability of volcanic rocks

    NASA Astrophysics Data System (ADS)

    Mueller, S.; Spieler, O.; Scheu, B.; Dingwell, D.

    2006-12-01

    The permeability of a porous medium is strongly dependent on its porosity, as a higher proportion of pore volume is generally expected to lead to a greater probability of pore interconnectedness and the formation of pathways that allow fluid flow. However, the relationship between permeability and porosity is not a unique one, as many other textural parameters may play an important role and substantially affect gas flow properties. Among these parameters are (a) the connection geometry (i.e. intergranular pore spaces in clastic sediments vs. bubble interconnections), (b) the pore sizes, (c) the pore shapes and (d) the pore size distribution. The gas permeability of volcanic rocks may influence various eruptive processes. The transition from a quiescently degassing dome to rock failure (fragmentation) may, for example, be controlled by the rock's permeability, inasmuch as it affects the speed at which a gas overpressure in vesicles is reduced in response to decompression. It is therefore essential to understand and quantify the influence of different pore textures on the degassing properties of volcanic rocks, as well as to investigate the effects of permeability on eruptive processes. Using a modified shock-tube-based fragmentation apparatus, we have measured unsteady-state permeability at a high initial pressure differential. Following sudden decompression above the rock cylinder, pressurized gas flows through the sample in a steel autoclave. A transient 1D filtration code has been developed to calculate permeability from the experimental pressure decay curve within a defined volume below the sample. An external furnace around the autoclave and the use of compressed salt as sealant also allow measurements at high temperatures, up to 800 °C. Over 130 permeability measurements have been performed on samples from different volcanic settings, covering a wide range of porosity. The results show a general positive relationship between porosity and permeability, with high data scatter. Analysis of the samples' eruptive origin, as well as the pore sizes, shapes and size distributions, allows an estimation of the contribution of various textural effects to the overall permeability.
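
    As a rough illustration of how a permeability value can be recovered from a pressure decay curve, the sketch below uses a lumped, quasi-steady compressible Darcy model rather than the authors' transient 1D filtration code; the geometry and gas-property values are assumptions for the example only.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

# Hypothetical geometry and gas properties (values are illustrative assumptions).
A, L, V_b = 2.0e-4, 6.0e-2, 1.0e-4   # sample cross-section [m^2], length [m], downstream volume [m^3]
mu = 1.8e-5                          # gas viscosity [Pa s]
p_a = 1.0e5                          # pressure above the sample after decompression [Pa]

def simulate(k, p0, t):
    """Quasi-steady, isothermal compressible Darcy flow out of the volume below the sample."""
    def dpdt(_, p):
        return -k * A * (p**2 - p_a**2) / (2.0 * mu * L * V_b)
    sol = solve_ivp(dpdt, (t[0], t[-1]), [p0], t_eval=t, rtol=1e-8)
    return sol.y[0]

def fit_permeability(t, p_obs):
    """Least-squares fit of k [m^2] to an observed pressure decay curve p_obs(t)."""
    loss = lambda log10_k: np.sum((simulate(10.0**log10_k, p_obs[0], t) - p_obs) ** 2)
    res = minimize_scalar(loss, bounds=(-16, -10), method="bounded")
    return 10.0 ** res.x
```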

  14. The development of miniplex primer sets for the analysis of degraded DNA

    NASA Astrophysics Data System (ADS)

    McCord, Bruce; Opel, Kerry; Chung, Denise; Drabek, Jiri; Tatarek, Nancy; Meadows Jantz, Lee; Butler, John

    2005-05-01

    In this project, a new set of multiplexed PCR reactions has been developed for the analysis of degraded DNA. These DNA markers, known as Miniplexes, utilize primers that have shorter amplicons for use in short tandem repeat (STR) analysis of degraded DNA. In our work we have defined six of these new STR multiplexes, each of which consists of 3 to 4 reduced size STR loci, and each labeled with a different fluorescent dye. When compared to commercially available STR systems, reductions in size of up to 300 base pairs are possible. In addition, these newly designed amplicons consist of loci that are fully compatible with the national computer DNA database known as CODIS. To demonstrate compatibility with commercial STR kits, a concordance study of 532 DNA samples of Caucasian, African American, and Hispanic origin was undertaken. There was 99.77% concordance between allele calls with the two methods. Of these 532 samples, only 15 samples showed discrepancies at one of 12 loci. These occurred predominantly at 2 loci, vWA and D13S317. DNA sequencing revealed that these locations had deletions between the two primer binding sites. Uncommon deletions like these can be expected in certain samples and will not affect the utility of the Miniplexes as tools for degraded DNA analysis. The Miniplexes were also applied to enzymatically digested DNA to assess their potential in degraded DNA analysis. The results demonstrated a greatly improved efficiency in the analysis of degraded DNA when compared to commercial STR genotyping kits. A series of human skeletal remains that had been exposed to a variety of environmental conditions were also examined. Sixty-four percent of the samples generated full profiles when amplified with the Miniplexes, while only sixteen percent of the samples tested generated full profiles with a commercial kit. In addition, complete profiles were obtained for eleven of the twelve Miniplex loci which had amplicon size ranges less than 200 base pairs. These data clearly demonstrate that smaller PCR amplicons provide an attractive alternative to mitochondrial DNA for forensic analysis of degraded DNA.

  15. Laboratory-based observations of capillary barriers and preferential flow in layered snow

    NASA Astrophysics Data System (ADS)

    Avanzi, F.; Hirashima, H.; Yamaguchi, S.; Katsushima, T.; De Michele, C.

    2015-12-01

    Several lines of evidence now show that the effects of capillary gradients and preferential flow on water transmission in snow may play a more important role than expected. To observe these processes and contribute to their characterization, we performed observations of the development of capillary barriers and preferential flow patterns in layered snow during cold-laboratory experiments. We considered three different layerings (all characterized by a finer-over-coarser texture in grain size) and three different water input rates. Nine samples of layered snow were sieved in a cold laboratory and subjected to a constant supply of dyed tracer. By means of visual inspection, horizontal sectioning and liquid water content measurements, the processes of ponding and preferential flow were characterized as a function of texture and water input rate. The dynamics of each sample were replicated using the multi-layer, physically based SNOWPACK model. Results show that capillary barriers and preferential flow are relevant processes governing the speed of liquid water in stratified snow. Ponding is associated with peaks in LWC at the boundary between the two layers equal to ~33-36 vol. % when the upper layer is composed of fine snow (grain size smaller than 0.5 mm). The thickness of the ponding layer at the textural boundary is between 0 and 3 cm, depending on sample stratigraphy. Heterogeneity in water transmission increases with grain size, while we do not observe any clear dependency on water input rate. The extensive comparison between LWC profiles observed and simulated by SNOWPACK (using an approximation of the Richards equation) shows that the model performs well in estimating the LWC peak at the boundary, while water speed in snow is underestimated by the chosen water transport scheme.

  16. The Diversity of Diffuse Ly α Nebulae around Star-forming Galaxies at High Redshift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, Rui; Lee, Kyoung-Soo; Dey, Arjun

    2017-03-10

    We report the detection of diffuse Lyα emission, or Lyα halos (LAHs), around star-forming galaxies at z ≈ 3.78 and 2.66 in the NOAO Deep Wide-Field Survey Boötes field. Our samples consist of a total of ∼1400 galaxies, within two separate regions containing spectroscopically confirmed galaxy overdensities. They provide a unique opportunity to investigate how the LAH characteristics vary with host galaxy large-scale environment and physical properties. We stack Lyα images of different samples defined by these properties and measure their median LAH sizes by decomposing the stacked Lyα radial profile into a compact galaxy-like and an extended halo-like component. We find that the exponential scale-length of LAHs depends on UV continuum and Lyα luminosities, but not on Lyα equivalent widths or galaxy overdensity parameters. The full samples, which are dominated by low UV-continuum luminosity Lyα emitters (M_UV ≳ −21), exhibit LAH sizes of 5-6 kpc. However, the most UV- or Lyα-luminous galaxies have more extended halos with scale-lengths of 7-9 kpc. The stacked Lyα radial profiles decline more steeply than recent theoretical predictions that include the contributions from gravitational cooling of infalling gas and from low-level star formation in satellites. However, the LAH extent matches what one would expect for photons produced in the galaxy and then resonantly scattered by gas in an outflowing envelope. The observed trends of LAH sizes with host galaxy properties suggest that the physical conditions of the circumgalactic medium (covering fraction, H I column density, and outflow velocity) change with halo mass and/or star formation rates.

  17. The Diversity of Diffuse Lyα Nebulae around Star-forming Galaxies at High Redshift

    NASA Astrophysics Data System (ADS)

    Xue, Rui; Lee, Kyoung-Soo; Dey, Arjun; Reddy, Naveen; Hong, Sungryong; Prescott, Moire K. M.; Inami, Hanae; Jannuzi, Buell T.; Gonzalez, Anthony H.

    2017-03-01

    We report the detection of diffuse Lyα emission, or Lyα halos (LAHs), around star-forming galaxies at z ≈ 3.78 and 2.66 in the NOAO Deep Wide-Field Survey Boötes field. Our samples consist of a total of ˜1400 galaxies, within two separate regions containing spectroscopically confirmed galaxy overdensities. They provide a unique opportunity to investigate how the LAH characteristics vary with host galaxy large-scale environment and physical properties. We stack Lyα images of different samples defined by these properties and measure their median LAH sizes by decomposing the stacked Lyα radial profile into a compact galaxy-like and an extended halo-like component. We find that the exponential scale-length of LAHs depends on UV continuum and Lyα luminosities, but not on Lyα equivalent widths or galaxy overdensity parameters. The full samples, which are dominated by low UV-continuum luminosity Lyα emitters (M UV ≳ -21), exhibit LAH sizes of 5-6 kpc. However, the most UV- or Lyα-luminous galaxies have more extended halos with scale-lengths of 7-9 kpc. The stacked Lyα radial profiles decline more steeply than recent theoretical predictions that include the contributions from gravitational cooling of infalling gas and from low-level star formation in satellites. However, the LAH extent matches what one would expect for photons produced in the galaxy and then resonantly scattered by gas in an outflowing envelope. The observed trends of LAH sizes with host galaxy properties suggest that the physical conditions of the circumgalactic medium (covering fraction, H I column density, and outflow velocity) change with halo mass and/or star formation rates.
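
    A decomposition of a stacked radial profile into a compact and an extended component can be sketched as a simple two-exponential fit; the profile shape, units, and numbers below are illustrative only and do not reproduce the authors' actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def core_plus_halo(r, a_core, r_core, a_halo, r_halo):
    """Compact galaxy-like component plus an extended exponential halo."""
    return a_core * np.exp(-r / r_core) + a_halo * np.exp(-r / r_halo)

r = np.linspace(1.0, 40.0, 40)                              # radius [kpc]
rng = np.random.default_rng(0)
profile = core_plus_halo(r, 10.0, 1.5, 0.5, 7.0) + rng.normal(0, 0.01, r.size)

p0 = [5.0, 1.0, 0.2, 5.0]                                   # initial guesses: core first, then halo
(a_c, r_c, a_h, r_h), _ = curve_fit(core_plus_halo, r, profile, p0=p0)
print("halo scale-length ~ %.1f kpc" % r_h)
```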

  18. The Power of Teacher Expectations: How Racial Bias Hinders Student Attainment

    ERIC Educational Resources Information Center

    Gershenson, Seth; Papageorge, Nicholas

    2018-01-01

    Despite abundant anecdotes and theories suggesting a causal effect of teachers' expectations on student outcomes, documenting its presence and size has been challenging. The reason is simple: positive correlations between what teachers expect and what students ultimately accomplish might simply result from teachers being skilled observers. In…

  19. Enhanced algorithms for stochastic programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishna, Alamuru S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, in part to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
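
    The surrogate idea can be illustrated with a simple control-variate estimator: compute the cheap surrogate's mean with a very large sample and correct it with a small expensive sample of the difference. The functions below are stand-ins, not the dissertation's recourse functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_recourse(x):
    """Stand-in for an expensive second-stage recourse value (hypothetical)."""
    return np.abs(x) + 0.1 * np.sin(5 * x)

def piecewise_linear(x):
    """Cheap piecewise-linear surrogate used as a control variate."""
    return np.abs(x)

# E[f] = E[g] + E[f - g]: estimate E[g] with a huge cheap sample and E[f - g],
# which has small variance, with a small expensive sample.
x_small = rng.normal(size=1_000)
x_large = rng.normal(size=1_000_000)
estimate = piecewise_linear(x_large).mean() + (
    expensive_recourse(x_small) - piecewise_linear(x_small)).mean()
print(estimate)
```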

  20. Phylogenetically resolving epidemiologic linkage

    PubMed Central

    Romero-Severson, Ethan O.; Bulla, Ingo; Leitner, Thomas

    2016-01-01

    Although the use of phylogenetic trees in epidemiological investigations has become commonplace, their epidemiological interpretation has not been systematically evaluated. Here, we use an HIV-1 within-host coalescent model to probabilistically evaluate transmission histories of two epidemiologically linked hosts. Previous critique of phylogenetic reconstruction has claimed that direction of transmission is difficult to infer, and that the existence of unsampled intermediary links or common sources can never be excluded. The phylogenetic relationship between the HIV populations of epidemiologically linked hosts can be classified into six types of trees, based on cladistic relationships and whether the reconstruction is consistent with the true transmission history or not. We show that the direction of transmission and whether unsampled intermediary links or common sources existed make very different predictions about expected phylogenetic relationships: (i) Direction of transmission can often be established when paraphyly exists, (ii) intermediary links can be excluded when multiple lineages were transmitted, and (iii) when the sampled individuals’ HIV populations are both monophyletic, a common source was likely the origin. Inconsistent results, suggesting the wrong transmission direction, were generally rare. In addition, the expected tree topology also depends on the number of transmitted lineages, the sample size, the time of the sample relative to transmission, and how fast the diversity increases after infection. Typically, 20 or more sequences per subject give robust results. We confirm our theoretical evaluations with analyses of real transmission histories and discuss how our findings should aid in interpreting phylogenetic results. PMID:26903617

  1. Phylogenetically resolving epidemiologic linkage

    DOE PAGES

    Romero-Severson, Ethan O.; Bulla, Ingo; Leitner, Thomas

    2016-02-22

    The use of phylogenetic trees in epidemiological investigations has become commonplace, but their epidemiological interpretation has not been systematically evaluated. Here, we use an HIV-1 within-host coalescent model to probabilistically evaluate transmission histories of two epidemiologically linked hosts. Previous critique of phylogenetic reconstruction has claimed that direction of transmission is difficult to infer, and that the existence of unsampled intermediary links or common sources can never be excluded. The phylogenetic relationship between the HIV populations of epidemiologically linked hosts can be classified into six types of trees, based on cladistic relationships and whether the reconstruction is consistent with the true transmission history or not. We show that the direction of transmission and whether unsampled intermediary links or common sources existed make very different predictions about expected phylogenetic relationships: (i) Direction of transmission can often be established when paraphyly exists, (ii) intermediary links can be excluded when multiple lineages were transmitted, and (iii) when the sampled individuals’ HIV populations are both monophyletic, a common source was likely the origin. Inconsistent results, suggesting the wrong transmission direction, were generally rare. In addition, the expected tree topology also depends on the number of transmitted lineages, the sample size, the time of the sample relative to transmission, and how fast the diversity increases after infection. Typically, 20 or more sequences per subject give robust results. Moreover, we confirm our theoretical evaluations with analyses of real transmission histories and discuss how our findings should aid in interpreting phylogenetic results.

  2. Phylogenetically resolving epidemiologic linkage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero-Severson, Ethan O.; Bulla, Ingo; Leitner, Thomas

    The use of phylogenetic trees in epidemiological investigations has become commonplace, but their epidemiological interpretation has not been systematically evaluated. Here, we use an HIV-1 within-host coalescent model to probabilistically evaluate transmission histories of two epidemiologically linked hosts. Previous critique of phylogenetic reconstruction has claimed that direction of transmission is difficult to infer, and that the existence of unsampled intermediary links or common sources can never be excluded. The phylogenetic relationship between the HIV populations of epidemiologically linked hosts can be classified into six types of trees, based on cladistic relationships and whether the reconstruction is consistent with the true transmission history or not. We show that the direction of transmission and whether unsampled intermediary links or common sources existed make very different predictions about expected phylogenetic relationships: (i) Direction of transmission can often be established when paraphyly exists, (ii) intermediary links can be excluded when multiple lineages were transmitted, and (iii) when the sampled individuals’ HIV populations are both monophyletic, a common source was likely the origin. Inconsistent results, suggesting the wrong transmission direction, were generally rare. In addition, the expected tree topology also depends on the number of transmitted lineages, the sample size, the time of the sample relative to transmission, and how fast the diversity increases after infection. Typically, 20 or more sequences per subject give robust results. Moreover, we confirm our theoretical evaluations with analyses of real transmission histories and discuss how our findings should aid in interpreting phylogenetic results.

  3. The development and validation of the client expectations of massage scale.

    PubMed

    Boulanger, Karen T; Campo, Shelly; Glanville, Jennifer L; Lowe, John B; Yang, Jingzhen

    2012-01-01

    Although there is evidence that client expectations influence client outcomes, a valid and reliable scale for measuring the range of client expectations, both for massage therapy and for the behaviors of their massage therapists, does not exist. Understanding how client expectations influence client outcomes would provide insight into how massage achieves its reported effects. The objective was to develop and validate the Client Expectations of Massage Scale (CEMS), a measure of clients' clinical, educational, interpersonal, and outcome expectations. The setting was offices of licensed massage therapists in Iowa. A practice-based research methodology was used to collect data from two samples of massage therapy clients. For Sample 1, 21 volunteer massage therapists collected data from their clients before the massage. Factor analysis was conducted to test construct validity and coefficient alpha was used to assess reliability. Correlational analyses with the CEMS, previous measures of client expectations, and the Life Orientation Test-Revised were examined to test the convergent and discriminant validity of the CEMS. For Sample 2, 24 massage therapists distributed study materials for clients to complete before and after a massage therapy session. Structural equation modeling was used to assess the construct, discriminant, and predictive validity of the CEMS. Sample 1 involved 320 and Sample 2 involved 321 adult massage clients. The intervention was standard care provided by licensed massage therapists. Outcomes were measured with the Numeric Rating Scale for pain and the Positive and Negative Affect Schedule-Revised (including the Serenity subscale). The CEMS demonstrated good construct, convergent, discriminant and predictive validity, and adequate reliability. Client expectations were generally positive toward massage and their massage therapists. Positive outcome expectations had a positive effect on clients' changes in pain and serenity. High interpersonal expectations had a negative effect on clients' changes in serenity. Client expectations contribute to the nonspecific effects of massage therapy.

  4. The influence of (public) health expenditure on longevity.

    PubMed

    Aísa, Rosa; Clemente, Jesús; Pueyo, Fernando

    2014-10-01

    We report new evidence on the contribution of health expenditure to increasing life expectancy in OECD countries, differentiating the effects of public and private health expenditure. A theoretical model is presented and estimated through a cross-country fixed-effects multiple regression analysis for a sample of OECD countries over the period 1980-2000. Although the effect of aggregate health expenditure is not conclusive, public health expenditure plays a significant role in enhancing longevity. However, its influence diminishes as the share of the public health sector in GDP expands, reaching a maximum at around 8%. Given that the influence of public health expenditure is positive, the ambiguous effect of aggregate expenditure suggests that the relative weights of the public and private health sectors matter, the second having a lower impact on longevity. This might explain the poor evolution of life expectancy in countries with a large amount of private resources devoted to health. In such cases, an extension of public services could give rise to a better outcome from the overall health investment.

  5. TEM in situ micropillar compression tests of ion irradiated oxide dispersion strengthened alloy

    NASA Astrophysics Data System (ADS)

    Yano, K. H.; Swenson, M. J.; Wu, Y.; Wharry, J. P.

    2017-01-01

    The growing role of charged particle irradiation in the evaluation of nuclear reactor candidate materials requires the development of novel methods to assess mechanical properties in near-surface irradiation damage layers just a few micrometers thick. In situ transmission electron microscopic (TEM) mechanical testing is one such promising method. In this work, microcompression pillars are fabricated from an Fe2+ ion-irradiated bulk specimen of a model Fe-9%Cr oxide dispersion strengthened (ODS) alloy. Yield strengths measured directly from TEM in situ compression tests are within expected values, and are consistent with predictions based on the irradiated microstructure. Measured elastic modulus values, once adjusted for the amount of deformation and deflection in the base material, are also within the expected range. A pillar size effect is only observed in samples with minimum dimension ≤100 nm, owing to the low inter-obstacle spacing in the as-received and irradiated material. TEM in situ micropillar compression tests hold great promise for quantitatively determining mechanical properties of shallow ion-irradiated layers.

  6. Morphological variation in Homo erectus and the origins of developmental plasticity

    PubMed Central

    Antón, Susan C.; Taboada, Hannah G.; Middleton, Emily R.; Rainwater, Christopher W.; Taylor, Andrea B.; Turner, Trudy R.; Turnquist, Jean E.; Weinstein, Karen J.; Williams, Scott A.

    2016-01-01

    Homo erectus was the first hominin to exhibit extensive range expansion. This extraordinary departure from Africa, especially into more temperate climates of Eurasia, has been variously related to technological, energetic and foraging shifts. The temporal and regional anatomical variation in H. erectus suggests that a high level of developmental plasticity, a key factor in the ability of H. sapiens to occupy a variety of habitats, may also have been present in H. erectus. Developmental plasticity, the ability to modify development in response to environmental conditions, results in differences in size, shape and dimorphism across populations that relate in part to levels of resource sufficiency and extrinsic mortality. These differences predict not only regional variations but also overall smaller adult sizes and lower levels of dimorphism in instances of resource scarcity and high predator load. We consider the metric variation in 35 human and non-human primate ‘populations’ from known environmental contexts and 14 time- and space-restricted paleodemes of H. erectus and other fossil Homo. Human and non-human primates exhibit more similar patterns of variation than expected, with plasticity evident, but in differing patterns by sex across populations. The fossil samples show less evidence of variation than expected, although H. erectus varies more than Neandertals. This article is part of the themed issue ‘Major transitions in human evolution’. PMID:27298467

  7. Historical biome distribution and recent human disturbance shape the diversity of arbuscular mycorrhizal fungi.

    PubMed

    Pärtel, Meelis; Öpik, Maarja; Moora, Mari; Tedersoo, Leho; Szava-Kovats, Robert; Rosendahl, Søren; Rillig, Matthias C; Lekberg, Ylva; Kreft, Holger; Helgason, Thorunn; Eriksson, Ove; Davison, John; de Bello, Francesco; Caruso, Tancredi; Zobel, Martin

    2017-10-01

    The availability of global microbial diversity data, collected using standardized metabarcoding techniques, makes microorganisms promising models for investigating the role of regional and local factors in driving biodiversity. Here we modelled the global diversity of symbiotic arbuscular mycorrhizal (AM) fungi using currently available data on AM fungal molecular diversity (small subunit (SSU) ribosomal RNA (rRNA) gene sequences) in field samples. To differentiate between regional and local effects, we estimated species pools (sets of potentially suitable taxa) for each site, which are expected to reflect regional processes. We then calculated community completeness, an index showing the fraction of the species pool present, which is expected to reflect local processes. We found significant spatial variation globally in species pool size, as well as in local and dark diversity (absent members of the species pool). Species pool size was larger close to areas containing tropical grasslands during the last glacial maximum, which are possible centres of diversification. Community completeness was greater in regions of high wilderness (remoteness from human disturbance). Local diversity was correlated with wilderness and current connectivity to mountain grasslands. Applying the species pool concept to symbiotic fungi facilitated a better understanding of how biodiversity can be jointly shaped by large-scale historical processes and recent human disturbance. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
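
    For calculations of this kind, one commonly used formulation of the completeness index is the log ratio of locally observed richness to dark diversity; the sketch below assumes that formulation (it is not taken from the paper) and uses hypothetical numbers.

```python
import math

def community_completeness(observed_richness, pool_size):
    """Completeness index: log ratio of observed taxa to the absent part of the species pool."""
    dark_diversity = pool_size - observed_richness
    return math.log(observed_richness / dark_diversity)

# e.g. 30 AM fungal taxa observed at a site whose estimated species pool holds 75 taxa
print(round(community_completeness(30, 75), 2))
```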

  8. Efficient isolation method for high-quality genomic DNA from cicada exuviae.

    PubMed

    Nguyen, Hoa Quynh; Kim, Ye Inn; Borzée, Amaël; Jang, Yikweon

    2017-10-01

    In recent years, animal ethics issues have led researchers to explore nondestructive methods to access materials for genetic studies. Cicada exuviae are among those materials because they are cast skins that individuals leave after molt and are easily collected. In this study, we aim to identify the most efficient extraction method to obtain DNA of high quantity and quality from cicada exuviae. We compared the relative DNA yield and purity of six extraction protocols, including both manual protocols and available commercial kits, extracting from four different exoskeleton parts. Furthermore, amplification and sequencing of the genomic DNA were evaluated in terms of whether sequence of the expected genomic size could be obtained. Both the choice of protocol and the exuvia part significantly affected DNA yield and purity. Only samples that were extracted using the PowerSoil DNA Isolation kit generated gel bands of the expected size as well as successful sequencing results. The failed attempts to extract DNA using other protocols could be explained partly by a low DNA yield from cicada exuviae and partly by contamination with humic acids that exist in the soil where cicada nymphs reside before emergence, as shown by spectroscopic measurements. Genomic DNA extracted from cicada exuviae could provide valuable information for species identification, allowing the investigation of genetic diversity across consecutive broods, or spatiotemporal variation among various populations. Consequently, we hope to provide a simple method to acquire pure genomic DNA applicable for multiple research purposes.

  9. Analysis of Darwin Rainfall Data: Implications on Sampling Strategy

    NASA Technical Reports Server (NTRS)

    Rafael, Qihang Li; Bras, Rafael L.; Veneziano, Daniele

    1996-01-01

    Rainfall data collected by radar in the vicinity of Darwin, Australia, have been analyzed in terms of their mean, variance, autocorrelation of area-averaged rain rate, and diurnal variation. It is found that, when compared with the well-studied GATE (Global Atmospheric Research Program Atlantic Tropical Experiment) data, Darwin rainfall has a larger coefficient of variation (CV), faster reduction of CV with increasing area size, weaker temporal correlation, and a strong diurnal cycle and intermittence. The coefficient of variation for Darwin rainfall has larger magnitude and exhibits larger spatial variability over the sea portion than over the land portion within the area of radar coverage. Stationary and nonstationary models have been used to study the sampling errors associated with space-based rainfall measurement. The nonstationary model shows that the sampling error is sensitive to the starting sampling time for some sampling frequencies, due to the diurnal cycle of rain, but not for others. Sampling experiments using data also show such sensitivity. When the errors are averaged over starting time, the results of the experiments and the stationary and nonstationary models match each other very closely. In the small areas for which data are available for both Darwin and GATE, the sampling error is expected to be larger for Darwin due to its larger CV.
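
    The sensitivity of the sampling error to the starting time, given a diurnal cycle, can be reproduced with a toy series; the rainfall model below is purely illustrative and is not the Darwin radar data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 90 * 24)                                # 90 days of hourly time steps
diurnal = 1.0 + 0.8 * np.sin(2 * np.pi * (t % 24) / 24)  # diurnal modulation of the mean rain rate
rain = rng.gamma(shape=0.3, scale=diurnal)               # skewed, intermittent rain rates (mm/h)

cv = rain.std() / rain.mean()                            # coefficient of variation

def sampling_error(series, every, start):
    """Relative error of the mean when sampling every `every` hours, starting at hour `start`."""
    sub = series[start::every]
    return (sub.mean() - series.mean()) / series.mean()

# A 12-hourly revisit aliases the diurnal cycle, so the error depends on the starting hour.
errors = [sampling_error(rain, 12, s) for s in range(12)]
print(cv, min(errors), max(errors))
```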

  10. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Characterization of helium diffusion behavior from continuous heating experiments: Sample screening and identification of multiple 4He components

    NASA Astrophysics Data System (ADS)

    McDannell, K. T.; Idleman, B. D.; Zeitler, P. K.

    2015-12-01

    Old, slowly cooled apatites often yield overdispersed helium ages due to factors such as parent zonation, He implantation, radiation damage, crystal defects, and fluid inclusions. Careful mineral selection and many replicate analyses can mitigate the impact of some of these effects. However, this approach adds unnecessary costs in time and resources when dating well-behaved apatites and is generally ineffective at identifying the root cause of age dispersion and providing suitable age corrections for poorly behaved samples. We assess a new technique utilizing static-gas measurement during continuous heating as a means to rapidly screen apatite samples. In about the time required for a conventional total-gas analysis, this method can discriminate between samples showing the volume-diffusion behavior expected for apatite and those showing anomalous release patterns, inconsistent with their use in thermochronologic applications. This method may also have the potential to quantify and discriminate between the radiogenic and extraneous 4He fractions released by a sample. Continuously heated samples that outgas by volume diffusion during a linear heating schedule should produce a characteristic sigmoidal 4He fractional loss profile, with the exact shape and position of these profiles (in loss vs. heating time space) controlled by sample kinetics, grain size, and heating rate. Secondary factors such as sample zoning and alpha-loss distribution have a relatively minor impact on such profiles. Well-behaved examples such as the Durango standard and other apatites with good age reproducibility show the expected smooth, sigmoidal gas release with complete exhaustion by temperatures predicted for volume diffusion using typical apatite kinetics (e.g., by ~900 °C for linear heating at 20 °C/minute). In contrast, "bad actor" samples that do not replicate well show significant degrees of helium release deferred to higher temperatures. We report on screening results for a range of samples including a suite of slowly cooled Cretaceous apatites from the Hangay Dome in central Mongolia, assessing the degree to which screening using cumulative heating can reliably identify bad-actor grains, and possibly, correct their ages.
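
    The expected sigmoidal release curve for volume diffusion during a linear ramp can be sketched with the standard spherical-diffusion series solution and an Arrhenius law; the kinetic parameters below are assumed, roughly apatite-like values for illustration, not measured Durango kinetics.

```python
import numpy as np

def fractional_loss(zeta, n_terms=200):
    """Fractional He loss from a sphere after normalized time zeta = integral(D/a^2 dt).

    Truncated series solution; a small offset remains at zeta = 0 unless n_terms is large.
    """
    n = np.arange(1, n_terms + 1)
    return 1.0 - (6.0 / np.pi**2) * np.sum(
        np.exp(-(n**2) * np.pi**2 * zeta[..., None]) / n**2, axis=-1)

# Linear heating at 20 °C/min with assumed, illustrative diffusion kinetics.
E_a = 138e3                         # activation energy [J/mol] (assumed)
D0_a2 = 10 ** 5.7                   # frequency factor D0/a^2 [1/s] (assumed)
R = 8.314
t = np.arange(0.0, 45 * 60, 1.0)    # 45 minutes of heating, 1 s steps
T = 273.15 + 25 + (20.0 / 60.0) * t # temperature ramp [K]
D_a2 = D0_a2 * np.exp(-E_a / (R * T))
zeta = np.cumsum(D_a2) * 1.0        # cumulative integral of D/a^2 over time
F = fractional_loss(zeta)           # sigmoidal fractional-loss curve vs. heating time
```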

  12. Lot quality assurance sampling for screening communities hyperendemic for Schistosoma mansoni.

    PubMed

    Rabarijaona, L P; Boisier, P; Ravaoalimalala, V E; Jeanne, I; Roux, J F; Jutand, M A; Salamon, R

    2003-04-01

    Lot quality assurance sampling (LQAS) was evaluated for rapid, low-cost identification of communities where Schistosoma mansoni infection was hyperendemic in southern Madagascar. In the study area, S. mansoni infection shows a very focused and heterogeneous distribution, requiring numerous local surveys. One sampling plan was tested in the field with schoolchildren and several others were simulated in the laboratory. Randomization and stool specimen collection were performed by voluntary teachers under direct supervision of the study staff and no significant problem occurred. As expected from Receiver Operating Characteristic (ROC) curves, all sampling plans allowed correct identification of hyperendemic communities and of most of the hypoendemic ones. Frequent misclassifications occurred for communities with intermediate prevalence, and the cheapest plans had very low specificity. The study confirmed that LQAS would be a valuable tool for large-scale screening in a country with scarce financial and staff resources. Involving teachers appeared to be quite feasible and should not lower the reliability of surveys. We recommend that the national schistosomiasis control programme systematically use LQAS for identification of communities, provided that sample sizes are adapted to the specific epidemiological patterns of S. mansoni infection in the main regions.
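
    The operating characteristics behind an LQAS sampling plan follow directly from the binomial distribution; the plan parameters below are hypothetical, not those used in the Madagascar study.

```python
from scipy.stats import binom

def prob_classified_hyperendemic(n, d, prevalence):
    """Probability a community is classified hyperendemic: more than d positives among n sampled."""
    return 1.0 - binom.cdf(d, n, prevalence)

# Hypothetical plan: sample 25 schoolchildren, flag the community if more than 7 are infected.
for p in (0.10, 0.30, 0.50):
    print(p, round(prob_classified_hyperendemic(25, 7, p), 3))
```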

  13. Real-time specific surface area measurements via laser-induced breakdown spectroscopy

    USGS Publications Warehouse

    Washburn, Kathryn E.; Birdwell, Justin E.; Howard, James E.

    2017-01-01

    From healthcare to cosmetics to environmental science, the specific surface area (SSA) of micro- and mesoporous materials or products can greatly affect their chemical and physical properties. SSA results are also widely used to examine source rocks in conventional and unconventional petroleum resource plays. Despite its importance, current methods to measure SSA are often cumbersome, time-consuming, or require cryogenic consumables (e.g., liquid nitrogen). These methods are not amenable to high-throughput environments, have stringent sample preparation requirements, and are not practical for use in the field. We present a new application of laser-induced breakdown spectroscopy for rapid measurement of SSA. This study evaluates geological samples, specifically organic-rich oil shales, but the approach is expected to be applicable to many other types of materials. The method uses optical emission spectroscopy to examine laser-generated plasma and quantify the amount of argon adsorbed to a sample during an inert gas purge. The technique can accommodate a wide range of sample sizes and geometries and has the potential for field use. These advantages for SSA measurement combined with the simultaneous acquisition of composition information make this a promising new approach for characterizing geologic samples and other materials.

  14. NEWLY QUENCHED GALAXIES AS THE CAUSE FOR THE APPARENT EVOLUTION IN AVERAGE SIZE OF THE POPULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carollo, C. M.; Bschorr, T. J.; Lilly, S. J.

    2013-08-20

    We use the large COSMOS sample of galaxies to study, in an internally self-consistent way, the change in the number densities of quenched early-type galaxies (Q-ETGs) of a given size over the redshift interval 0.2 < z < 1, in order to examine the claimed size evolution of these galaxies. In a stellar mass bin at 10^10.5 < M_galaxy < 10^11 M_Sun, we see no change in the number density of compact Q-ETGs over this redshift range, while in a higher mass bin at >10^11 M_Sun, where we would expect merging to be more significant, we find a small decrease, by ~30%. In both mass bins, the increase of the median sizes of Q-ETGs with time is primarily caused by the addition to the size function of larger and more diffuse Q-ETGs. At all masses, compact Q-ETGs become systematically redder toward later epochs, with a (U - V) color difference which is consistent with a passive evolution of their stellar populations, indicating that they are a stable population that does not appreciably evolve in size. We find furthermore, at all epochs, that the larger Q-ETGs (at least in the lower mass bin) have average rest-frame colors that are systematically bluer than those of the more compact Q-ETGs, suggesting that the former are indeed younger than the latter. The idea that new, large, Q-ETGs are responsible for the observed growth in the median size of the population at a given mass is also supported by analysis of the sizes and number of the star-forming galaxies that are expected to be the progenitors of the new Q-ETGs over the same period. In the low mass bin, the new Q-ETGs appear to have ~30% smaller half-light radii than their star-forming progenitors. This is likely due to the fading of their disks after they cease star formation. Comparison with higher redshifts shows that the median size of newly quenched galaxies roughly scales, at constant mass, as (1 + z)^-1. We conclude that the dominant cause of the size evolution seen in the Q-ETG population is that the average sizes and thus stellar densities of individual Q-ETGs roughly scale with the average density of the universe at the time when they were quenched, and that subsequent size changes in individual objects, through merging or other processes, are of secondary importance, especially at masses below 10^11 M_Sun.

  15. The legibility of prescription medication labelling in Canada

    PubMed Central

    Ahrens, Kristina; Krishnamoorthy, Abinaya; Gold, Deborah; Rojas-Fernandez, Carlos H.

    2014-01-01

    Introduction: The legibility of medication labelling is a concern for all Canadians, because poor or illegible labelling may lead to miscommunication of medication information and poor patient outcomes. There are currently few guidelines and no regulations regarding print standards on medication labels. This study analyzed sample prescription labels from Ontario, Canada, and compared them with print legibility guidelines (both generic and specific to medication labels). Methods: Cluster sampling was used to randomly select a total of 45 pharmacies in the tri-cities of Kitchener, Waterloo and Cambridge. Pharmacies were asked to supply a regular label with a hypothetical prescription. The print characteristics of patient-critical information were compared against the recommendations for prescription labels by pharmaceutical and health organizations and for print accessibility by nongovernmental organizations. Results: More than 90% of labels followed the guidelines for font style, contrast, print colour and nonglossy paper. However, only 44% of the medication instructions met the minimum guideline of 12-point print size, and none of the drug or patient names met this standard. Only 5% of the labels were judged to make the best use of space, and 51% used left alignment. None of the instructions were in sentence case, as is recommended. Discussion: We found discrepancies between guidelines and current labels in print size, justification, spacing and methods of emphasis. Conclusion: Improvements in pharmacy labelling are possible without moving to new technologies or changing the size of labels and would be expected to enhance patient outcomes. PMID:24847371

  16. Geometric k-nearest neighbor estimation of entropy and mutual information

    NASA Astrophysics Data System (ADS)

    Lord, Warren M.; Sun, Jie; Bollt, Erik M.

    2018-03-01

    Nonparametric estimation of mutual information is used in a wide range of scientific problems to quantify dependence between variables. The k-nearest neighbor (knn) methods are consistent, and therefore expected to work well for a large sample size. These methods use geometrically regular local volume elements. This practice allows maximum localization of the volume elements, but can also induce a bias due to a poor description of the local geometry of the underlying probability measure. We introduce a new class of knn estimators that we call geometric knn estimators (g-knn), which use more complex local volume elements to better model the local geometry of the probability measures. As an example of this class of estimators, we develop a g-knn estimator of entropy and mutual information based on elliptical volume elements, capturing the local stretching and compression common to a wide range of dynamical system attractors. A series of numerical examples in which the thickness of the underlying distribution and the sample sizes are varied suggest that local geometry is a source of problems for knn methods such as the Kraskov-Stögbauer-Grassberger estimator when local geometric effects cannot be removed by global preprocessing of the data. The g-knn method performs well despite the manipulation of the local geometry. In addition, the examples suggest that the g-knn estimators can be of particular relevance to applications in which the system is large, but the data size is limited.
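
    For context, the conventional knn approach with geometrically regular volume elements that g-knn generalizes can be written compactly as the Kozachenko-Leonenko entropy estimator; the sketch below is a generic implementation, not the authors' g-knn code.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gamma

def kl_entropy(x, k=3):
    """Kozachenko-Leonenko knn entropy estimator (in nats); x has shape (n_samples, dim)."""
    n, d = x.shape
    tree = cKDTree(x)
    eps, _ = tree.query(x, k=k + 1)           # first neighbour is the point itself
    eps = eps[:, -1]                          # distance to the k-th nearest neighbour
    log_unit_ball = (d / 2.0) * np.log(np.pi) - np.log(gamma(d / 2.0 + 1.0))
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(eps))

# Sanity check on a 2-D standard normal (true entropy = log(2*pi*e) ~ 2.84 nats)
x = np.random.default_rng(1).normal(size=(5000, 2))
print(kl_entropy(x))
```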

  17. Evaluation of a flow cytometry method to determine size and real refractive index distributions in natural marine particle populations.

    PubMed

    Agagliate, Jacopo; Röttgers, Rüdiger; Twardowski, Michael S; McKee, David

    2018-03-01

    A flow cytometric (FC) method was developed to retrieve particle size distributions (PSDs) and real refractive index (n_r) information in natural waters. Geometry and signal response of the sensors within the flow cytometer (CytoSense, CytoBuoy b.v., Netherlands) were characterized to form a scattering inversion model based on Mie theory. The procedure produced a mesh of diameter and n_r isolines where each particle is assigned the diameter and n_r values of the closest node, producing PSDs and particle real refractive index distributions. The method was validated using polystyrene bead standards of known diameter and polydisperse suspensions of oil with known n_r, and subsequently applied to natural samples collected across a broad range of UK shelf seas. FC PSDs were compared with independent PSDs produced from data of two LISST-100X instruments (type B and type C). PSD slopes and features were found to be consistent between the FC and the two LISST-100X instruments, but LISST concentrations were found in disagreement with FC concentrations and with each other. FC n_r values were found to agree with expected refractive index values of typical marine particle components across all samples considered. The determination of particle size and refractive index distributions enabled by the FC method has potential to facilitate identification of the contribution of individual subpopulations to the bulk inherent optical properties and biogeochemical properties of the particle population.

  18. Colloidal Synthesis of Te-Doped Bi Nanoparticles: Low-Temperature Charge Transport and Thermoelectric Properties.

    PubMed

    Gu, Da Hwi; Jo, Seungki; Jeong, Hyewon; Ban, Hyeong Woo; Park, Sung Hoon; Heo, Seung Hwae; Kim, Fredrick; Jang, Jeong In; Lee, Ji Eun; Son, Jae Sung

    2017-06-07

    Electronically doped nanoparticles formed by incorporation of impurities have been of great interest because of their controllable electrical properties. However, the development of a strategy for n-type or p-type doping of sub-10 nm-sized nanoparticles under the quantum confinement regime is very challenging using conventional processes, owing to the difficulty of synthesis. Herein, we report the colloidal chemical synthesis of sub-10 nm-sized tellurium (Te)-doped bismuth (Bi) nanoparticles with precisely controlled Te content from 0 to 5% and systematically investigate their low-temperature charge transport and thermoelectric properties. Microstructural characterization of the nanoparticles demonstrates that Te ions are successfully incorporated into the Bi nanoparticles rather than remaining on the nanoparticle surfaces. Low-temperature Hall measurements of the hot-pressed Te-doped Bi-nanostructured materials, with grain sizes ranging from 30 to 60 nm, show that the charge transport properties are governed by the doping content and the related impurity and nanoscale grain boundary scattering. Furthermore, the low-temperature thermoelectric properties reveal that the electrical conductivity and Seebeck coefficient change with the Te content as expected, whereas the thermal conductivity is significantly reduced by Te doping because of phonon scattering at sites arising from impurities and nanoscale grain boundaries. Accordingly, the 1% Te-doped Bi sample exhibits a figure of merit ZT approximately 10% higher than that of the undoped sample. The synthetic strategy demonstrated in this study offers the possibility of electronic doping of various quantum-confined nanoparticles for diverse applications.

  19. High-resolution Antibody Array Analysis of Childhood Acute Leukemia Cells*

    PubMed Central

    Kanderova, Veronika; Kuzilkova, Daniela; Stuchly, Jan; Vaskova, Martina; Brdicka, Tomas; Fiser, Karel; Hrusak, Ondrej; Lund-Johansen, Fridtjof

    2016-01-01

    Acute leukemia is a disease pathologically manifested at both the genomic and proteomic levels. Molecular genetic technologies are currently widely used in clinical research. In contrast, sensitive and high-throughput proteomic techniques for performing protein analyses in patient samples are still lacking. Here, we used a technology based on size exclusion chromatography followed by immunoprecipitation of target proteins with an antibody bead array (Size Exclusion Chromatography-Microsphere-based Affinity Proteomics, SEC-MAP) to detect hundreds of proteins from a single sample. In addition, we developed semi-automatic bioinformatics tools to adapt this technology for high-content proteomic screening of pediatric acute leukemia patients. To confirm the utility of SEC-MAP in leukemia immunophenotyping, we tested 31 leukemia diagnostic markers in parallel by SEC-MAP and flow cytometry. We identified 28 antibodies suitable for both techniques. Eighteen of them provided excellent quantitative correlation between SEC-MAP and flow cytometry (p < 0.05). Next, SEC-MAP was applied to examine 57 diagnostic samples from patients with acute leukemia. In this assay, we used 632 different antibodies and detected 501 targets. Of those, 47 targets were differentially expressed between at least two of the three acute leukemia subgroups. The CD markers correlated with immunophenotypic categories as expected. Among non-CD markers, we found DBN1, PAX5, or PTK2 overexpressed in B-cell precursor acute lymphoblastic leukemias; LAT, SH2D1A, or STAT5A overexpressed in T-cell acute lymphoblastic leukemias; and HCK, GLUD1, or SYK overexpressed in acute myeloid leukemias. In addition, OPAL1 overexpression corresponded to the ETV6-RUNX1 chromosomal translocation. In summary, we demonstrated that SEC-MAP technology is a powerful tool for detecting hundreds of proteins in clinical samples obtained from pediatric acute leukemia patients. It provides information about protein size and reveals differences in protein expression between particular leukemia subgroups. Forty-seven of the SEC-MAP-identified targets were validated by other conventional methods in this study. PMID:26785729

  20. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
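
    The kind of retrospective power computation described above can be reproduced for a two-sample t-test from a reported effect size and the group sizes. Below is a minimal sketch using the noncentral t distribution in Python; the effect size, group sizes, and alpha are illustrative values, not figures taken from the surveyed studies.

        import numpy as np
        from scipy import stats

        def two_sample_t_power(d, n1, n2, alpha=0.05):
            """Approximate power of a two-sided two-sample t-test
            for a standardized effect size d (Cohen's d)."""
            df = n1 + n2 - 2
            ncp = d * np.sqrt(n1 * n2 / (n1 + n2))   # noncentrality parameter
            t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
            # power = P(|T| > t_crit) under the noncentral t distribution
            return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

        # Illustrative: a "large" sample effect observed with small groups
        print(two_sample_t_power(d=0.8, n1=15, n2=15))   # roughly 0.56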

  1. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  2. Planimetric correlation between the submandibular glands and the pancreas: a postmortem ductographic study.

    PubMed

    Stimec, Bojan V; Rakocevic, Zoran; Ignjatovic, Dejan; Fasel, Jean H D

    2018-01-01

    The salivary glands and pancreas have comparable anatomic and antigenic properties and can share common pathogenetic mechanisms involving toxic or autoimmune processes. The aim of this study is to assess the correlation in size between the normal submandibular glands and the pancreas. The study was based on human autopsy specimens of the pancreas, neck and oral base from 22 adults, both sexes (mean age, 57.9 years). The pancreatic and submandibular ducts were injected with a contrast medium, and the area of the salivary and pancreatic glandular ductograms was measured with the aid of software for quantification of visual information. Samples of tissue from the salivary glands and the pancreas were studied by means of light microscopy. A high correlation was found between the planimetric size of the pancreas and the submandibular glands (correlation coefficient 0.497 and 0.699 for the right and the left gland, respectively). This ratio was close to 5:1. There were no significant differences in size for the left vs. right submandibular gland (p = 0.39). The ductograms were significantly larger in size in males than in females (p < 0.001). This study has proven a positive correlation in planimetric size between the normal submandibular glands and pancreas, a result that is expected to have possible clinical implications in the long-term follow-up of patients with chronic pancreatitis.

  3. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  4. Adolescent precursors of adult borderline personality pathology in a high-risk community sample.

    PubMed

    Conway, Christopher C; Hammen, Constance; Brennan, Patricia A

    2015-06-01

    Longitudinal studies of the exact environmental conditions and personal attributes contributing to the development of borderline personality disorder (BPD) are rare. Furthermore, existing research typically examines risk factors in isolation, limiting our knowledge of the relative effect sizes of different risk factors and how they act in concert to bring about borderline personality pathology. The present study investigated the prospective effects of diverse acute and chronic stressors, proband psychopathology, and maternal psychopathology on BPD features in a high-risk community sample (N = 700) of youth followed from mid-adolescence to young adulthood. Multivariate analyses revealed significant effects of maternal externalizing disorder history, offspring internalizing disorder history, family stressors, and school-related stressors on BPD risk. Contrary to expectations, no interactions between chronically stressful environmental conditions and personal characteristics in predicting borderline personality features were detected. Implications of these findings for etiological theories of BPD and early screening efforts are discussed.

  5. ALMA observation of high-z extreme star-forming environments discovered by Planck/Herschel

    NASA Astrophysics Data System (ADS)

    Kneissl, R.

    2015-05-01

    The Cosmic Microwave Background satellite Planck with its High Frequency Instrument has surveyed the mm/sub-mm sky in six frequency channels from 100 to 900 GHz. A sample of 228 cold sources of the Cosmic Infrared Background was observed in follow-up with Herschel SPIRE. The majority of sources appear to be over-densities of star-forming galaxies matching the size of high-z proto-cluster regions, while a 3% fraction are individual bright, lensed galaxies. A large observing program is underway with the aim of resolving the regions into the constituent members of the Planck sources. First ALMA data have been received on one Planck/Herschel proto-cluster candidate, showing the expected large over-abundance of bright mm/sub-mm sources within the cluster region. ALMA long baseline data of the brightest lensed galaxy in the sample with > 1 Jy at 350 μm are also forthcoming.

  6. The Impact of Kt/V Urea-Based Dialysis Adequacy on Quality of Life and Adherence in Haemodialysis Patients: A Cross-Sectional Study in Greece.

    PubMed

    Theofilou, Paraskevi; Togas, Constantinos; Vasilopoulou, Chrysoula; Minos, Christos; Zyga, Sofia; Tzitzikos, Giorgos

    2015-04-13

    There is clear evidence of a link between dialysis adequacy (as measured by urea kinetic modeling or urea reduction ratio) and such important clinical outcomes as morbidity and mortality. Evidence regarding the relationship between dialysis adequacy and quality of life (QOL) outcomes as well as adherence is less clear. The present paper is a study protocol that aims to answer the following research question: what is the impact of dialysis adequacy on QOL and adherence in a sample of hemodialysis patients? The final sample size will be around 100 patients undergoing hemodialysis. Each subject's QOL and adherence will be measured using the following instruments: i) the Missoula-VITAS quality of life index 25; ii) the multidimensional scale of perceived social support; and iii) the simplified medication adherence questionnaire. Dialysis adequacy is expected to be related to QOL and adherence scores.

  7. Analysis of irradiated U-7wt%Mo dispersion fuel microstructures using automated image processing

    DOE PAGES

    Collette, R.; King, J.; Buesch, C.; ...

    2016-04-01

    The High Performance Research Reactor Fuel Development (HPPRFD) program is responsible for developing low enriched uranium (LEU) fuel substitutes for high performance reactors fueled with highly enriched uranium (HEU) that have not yet been converted to LEU. The uranium-molybdenum (U-Mo) fuel system was selected for this effort. In this study, fission gas pore segmentation was performed on U-7wt%Mo dispersion fuel samples at three separate fission densities using an automated image processing interface developed in MATLAB. Pore size distributions were attained that showed both expected and unexpected fission gas behavior. In general, it proved challenging to identify any dominant trends when comparing fission bubble data across samples from different fuel plates due to varying compositions and fabrication techniques. Here, the results exhibited fair agreement with the fission density vs. porosity correlation developed by the Russian reactor conversion program.

  8. Analysis of irradiated U-7wt%Mo dispersion fuel microstructures using automated image processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.; King, J.; Buesch, C.

    The High Performance Research Reactor Fuel Development (HPPRFD) program is responsible for developing low enriched uranium (LEU) fuel substitutes for high performance reactors fueled with highly enriched uranium (HEU) that have not yet been converted to LEU. The uranium-molybdenum (U-Mo) fuel system was selected for this effort. In this study, fission gas pore segmentation was performed on U-7wt%Mo dispersion fuel samples at three separate fission densities using an automated image processing interface developed in MATLAB. Pore size distributions were attained that showed both expected and unexpected fission gas behavior. In general, it proved challenging to identify any dominant trends when comparing fission bubble data across samples from different fuel plates due to varying compositions and fabrication techniques. Here, the results exhibited fair agreement with the fission density vs. porosity correlation developed by the Russian reactor conversion program.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murase, Kenya, E-mail: murase@sahs.med.osaka-u.ac.jp; Song, Ruixiao; Hiratsuka, Samu

    We investigated the feasibility of visualizing blood coagulation using a system for magnetic particle imaging (MPI). A magnetic field-free line is generated using two opposing neodymium magnets and transverse images are reconstructed from the third-harmonic signals received by a gradiometer coil, using the maximum likelihood-expectation maximization algorithm. Our MPI system was used to image the blood coagulation induced by adding CaCl2 to whole sheep blood mixed with magnetic nanoparticles (MNPs). The “MPI value” was defined as the pixel value of the transverse image reconstructed from the third-harmonic signals. MPI values were significantly smaller for coagulated blood samples than those without coagulation. We confirmed the rationale of these results by calculating the third-harmonic signals for the measured viscosities of samples, with an assumption that the magnetization and particle size distribution of MNPs obey the Langevin equation and log-normal distribution, respectively. We concluded that MPI can be useful for visualizing blood coagulation.

  10. Dielectric and electrical characteristics of Sr modified Ca1Cu3Ti4O12

    NASA Astrophysics Data System (ADS)

    Sahu, M.; Choudhary, R. N. P.; Roul, B. K.

    2018-05-01

    This paper mainly reports on the effect of Sr substitution on the dielectric and electrical properties of CaCu3Ti4O12 at different temperatures and frequencies. Preliminary analysis of X-ray diffraction data of sintered samples confirms the reported cubic structure. Study of surface morphology shows that the surface of the samples contains well-defined and uniformly distributed grains. Some electrical parameters (permittivity, tangent loss and impedance) of the materials were measured and analyzed over a wide range of temperature (25 to 315 °C) and frequency (50 Hz to 2×10^6 Hz). An ultrahigh dielectric constant and low energy dissipation were observed under these experimental conditions for the phase-pure prepared compounds. It is expected that the addition of nano-sized compounds or oxides will help to enhance the above properties, which are useful for the fabrication of supercapacitors.

  11. Intercomparison of fog water samplers

    NASA Astrophysics Data System (ADS)

    Schell, Dieter; Georgii, Hans-Walter; Maser, Rolf; Jaeschke, Wolfgang; Arends, Beate G.; Kos, Gerard P. A.; Winkler, Peter; Schneider, Thomas; Berner, Axel; Kruisz, Christian

    1992-11-01

    During the Po Valley Fog Experiment 1989, two fogwater collectors were operated simultaneously at the ground and the results were compared to each other. The chemical analyses of the samples as well as the collection efficiencies showed remarkable differences between both collectors. Some differences in the solute concentrations in the samples of both collectors could be expected due to small differences in the 50-percent cut-off diameters. The large differences in the collection efficiencies, however, cannot be explained by these small variations of d50, because normally only a small fraction of the water mass is concentrated in the size range of 5-7-micron droplets. It is shown that it is not sufficient to characterize a fogwater collector only by its cut-off diameter. The results of several wind tunnel calibration tests show that the collection efficiencies of the fogwater collectors are a function of wind speed and the shape of the droplet spectra.

  12. Altruism relates to health in an ethnically diverse sample of older adults.

    PubMed

    Brown, William Michael; Consedine, Nathan S; Magai, Carol

    2005-05-01

    The existing literature indicates links between aspects of social network functioning and health outcomes. It is generally believed that networks that are larger or provide greater instrumental and emotional support contribute to improved health and, perhaps, greater longevity. Recently, it has been suggested that giving as well as receiving social support may be of benefit. On the basis of evolutionary theories of emotion and altruism, the current study sought to test this thesis in a large, ethnically diverse sample of community-dwelling older adults. As expected, levels of social support given were associated with lower morbidity, whereas levels of receiving were not. It is important that these relations held even when (a) socioeconomic status, education, marital status, age, gender, ethnicity, and (b) absolute network size and activity limitation were controlled for. Results are discussed in terms of their implications for theory regarding the relations among social exchanges, giving, and later life adaptation among older adults.

  13. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes, the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. How do changes in suspended sediment concentration alone influence the size of mud flocs under steady turbulent shearing?

    NASA Astrophysics Data System (ADS)

    Tran, Duc; Kuprenas, Rachel; Strom, Kyle

    2018-04-01

    Modeling the size and settling velocity of sediment under the influence of flocculation is crucial for the accurate prediction of mud movement and deposition in sediment transport modeling of environments such as agricultural streams, large coastal rivers, estuaries, river plumes, and turbidity currents. Yet, collecting accurate and high-resolution data on mud flocs is difficult. As a result, models that account for the influence of flocculation on mud settling velocity are based on sparse data that often present non-congruent relationships between floc properties and basic drivers of flocculation, such as suspended sediment concentration. This study examines the influence of suspended sediment concentration on floc size populations within a turbulent suspension. Specifically, the work investigates: (1) the relationship between the equilibrium floc size and suspended sediment concentration under conditions of steady concentration and turbulent shearing; and (2) the speed at which mature flocs adapt to an unsteady drop in the concentration when turbulent shear is constant. Two sets of experiments were used to investigate the target processes. All work was conducted in laboratory mixing tanks using a floc camera and a newly developed image acquisition method. The new method allows for direct imaging and sizing of flocs within turbulent suspensions of clay in concentrations ranging from 15 to 400 mg/L, so that no transfer of the sample to another settling column or imaging tank is needed. The primary conclusions from the two sets of experiments are: (1) that the equilibrium floc size in an energetic turbulent suspension is linearly and positively related to concentration over the range of C = 50 to 400 mg/L, yet with a smaller-than-expected slope based on previous data and models from low-energy environments; and (2) that floc sizes respond quickly (with a time lag on the order of 1-15 min) to time-varying decreases in concentration at a turbulent shearing rate of G = 50 s^-1. Overall, the data illustrate that equilibrium floc size is a positive function of concentration, but that the rate of increase is weaker than expected. The data also suggest that approximating the size or settling velocity of some muds with a simple equilibrium model might be appropriate if the time steps of interest are on the order of 10 min or larger. The data also show the importance of calibrating historic mud settling velocity equations for accurate predictions.

  15. Neurocognitive performance in family-based and case-control studies of schizophrenia

    PubMed Central

    Gur, Ruben C.; Braff, David L.; Calkins, Monica E.; Dobie, Dorcas J.; Freedman, Robert; Green, Michael F.; Greenwood, Tiffany A.; Lazzeroni, Laura C.; Light, Gregory A.; Nuechterlein, Keith H.; Olincy, Ann; Radant, Allen D.; Seidman, Larry J.; Siever, Larry J.; Silverman, Jeremy M.; Sprock, Joyce; Stone, William S.; Sugar, Catherine A.; Swerdlow, Neal R.; Tsuang, Debby W.; Tsuang, Ming T.; Turetsky, Bruce I.; Gur, Raquel E.

    2014-01-01

    Background Neurocognitive deficits in schizophrenia (SZ) are established and the Consortium on the Genetics of Schizophrenia (COGS) investigated such measures as endophenotypes in family-based (COGS-1) and case-control (COGS-2) studies. By requiring family participation, family-based sampling may result in samples that vary demographically and perform better on neurocognitive measures. Methods The Penn computerized neurocognitive battery (CNB) evaluates accuracy and speed of performance for several domains and was administered across sites in COGS-1 and COGS-2. Most tests were included in both studies. COGS-1 included 328 patients with SZ and 497 healthy comparison subjects (HCS) and COGS-2 included 1195 patients and 1009 HCS. Results Demographically, COGS-1 participants were younger, more educated, with more educated parents and higher estimated IQ compared to COGS-2 participants. After controlling for demographics, the two samples produced very similar performance profiles compared to their respective controls. As expected, performance was better and with smaller effect sizes compared to controls in COGS-1 relative to COGS-2. Better performance was most pronounced for spatial processing while emotion identification had large effect sizes for both accuracy and speed in both samples. Performance was positively correlated with functioning and negatively with negative and positive symptoms in both samples, but correlations were attenuated in COGS-2, especially with positive symptoms. Conclusions Patients ascertained through family-based design have more favorable demographics and better performance on some neurocognitive domains. Thus, studies that use case-control ascertainment may tap into populations with more severe forms of illness that are exposed to less favorable factors compared to those ascertained with family-based designs. PMID:25432636

  16. Technetium-99 and strontium-90: Abundance determination at ultratrace sensitivity by AMS as signatures of undeclared nuclear reprocessing activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McAninch, J.E.; Proctor, I.D.

    1995-03-01

    The purpose of this White Paper is to examine the use of the ultratrace technique Accelerator Mass Spectrometry (AMS) to lower detection limits for 99Tc and 90Sr, and to examine the utility of these isotopes as signatures of a covert reprocessing facility. The International Atomic Energy Agency (IAEA) has committed to improving the effectiveness of the IAEA Safeguards System. This is in some degree a result of the discovery in 1991 of an undeclared Iraqi EMIS program. Recommendations from the March 1993 Consultants Group Meeting have resulted in several studies and follow-on field trials to identify environmental signatures from covert nuclear fuel reprocessing activity. In particular, the April 1993 reports of the Standing Advisory Group on Safeguards Implementation (SAGSI) identified the long-lived radioisotopes technetium-99 and strontium-90 as two reliable signatures of fuel reprocessing activity. This report also suggested pathways in the chemical processing of irradiated fuel where these elements would be volatilized and potentially released in amounts detectable with ultratrace sensitivity techniques. Based on measured 99Tc background levels compiled from a variety of sources, it is estimated that AMS can provide 10% measurements of environmental levels of 99Tc in a few minutes using modestly sized samples: a few grams for soils, plants, or animal tissues; one to several liters for rain or seawater samples; and tens to hundreds of cubic meters for air sampling. Small sample sizes and high sample throughput result in significant increases in feasibility, cost effectiveness, and quality of data for a regional monitoring program. Similar results are expected for 90Sr.

  17. Practice makes proficient: pigeons (Columba livia) learn efficient routes on full-circuit navigational traveling salesperson problems.

    PubMed

    Baron, Danielle M; Ramirez, Alejandro J; Bulitko, Vadim; Madan, Christopher R; Greiner, Ariel; Hurd, Peter L; Spetch, Marcia L

    2015-01-01

    Visiting multiple locations and returning to the start via the shortest route, referred to as the traveling salesman (or salesperson) problem (TSP), is a valuable skill for both humans and non-humans. In the current study, pigeons were trained with increasing set sizes of up to six goals, with each set size presented in three distinct configurations, until consistency in route selection emerged. After training at each set size, the pigeons were tested with two novel configurations. All pigeons acquired routes that were significantly more efficient (i.e., shorter in length) than expected by chance selection of the goals. On average, the pigeons also selected routes that were more efficient than expected based on a local nearest-neighbor strategy and were as efficient as the average route generated by a crossing-avoidance strategy. Analysis of the routes taken indicated that they conformed to both a nearest-neighbor and a crossing-avoidance strategy significantly more often than expected by chance. Both the time taken to visit all goals and the actual distance traveled decreased from the first to the last trials of training in each set size. On the first trial with novel configurations, average efficiency was higher than chance, but was not higher than expected from a nearest-neighbor or crossing-avoidance strategy. These results indicate that pigeons can learn to select efficient routes on a TSP problem.

  18. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon corneal specular microscopes. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
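
    The customized sample sizes reported above follow the usual precision argument: enough cells must be counted that the relative error of the mean stays below 0.05 at a 95% reliability degree. A minimal sketch of that calculation is given below, assuming a normal approximation and a cell-size coefficient of variation taken from a pilot image; the CV values are illustrative, not figures from the study.

        import math
        from scipy import stats

        def cells_needed(cv, rel_error=0.05, reliability=0.95):
            """Number of cells so that the relative error of the mean
            stays below rel_error with the given reliability (normal approx.)."""
            z = stats.norm.ppf(1 - (1 - reliability) / 2)
            return math.ceil((z * cv / rel_error) ** 2)

        # Illustrative cell-size coefficients of variation
        print(cells_needed(0.30))   # about 139 cells
        print(cells_needed(0.45))   # about 312 cells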

  19. Annual reversible plasticity of feeding structures: cyclical changes of jaw allometry in a sea urchin

    PubMed Central

    Ebert, Thomas A.; Hernández, José Carlos; Clemente, Sabrina

    2014-01-01

    A wide variety of organisms show morphologically plastic responses to environmental stressors but in general these changes are not reversible. Though less common, reversible morphological structures are shown by a range of species in response to changes in predators, competitors or food. Theoretical analysis indicates that reversible plasticity increases fitness if organisms are long-lived relative to the frequency of changes in the stressor and morphological changes are rapid. Many sea urchin species show differences in the sizes of jaws (demi-pyramids) of the feeding apparatus, Aristotle's lantern, relative to overall body size, and these differences have been correlated with available food. The question addressed here is whether reversible changes of relative jaw size occur in the field as available food changes with season. Monthly samples of the North American Pacific coast sea urchin Strongylocentrotus purpuratus were collected from Gregory Point on the Oregon (USA) coast and showed an annual cycle of relative jaw size together with a linear trend from 2007 to 2009. Strongylocentrotus purpuratus is a long-lived species and under field conditions individuals experience multiple episodes of changes in food resources both seasonally and from year to year. Their rapid and reversible jaw plasticity fits well with theoretical expectations. PMID:24500161

  20. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
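
    As a concrete illustration of the Monte Carlo comparisons mentioned above, the sketch below simulates Gaussian data with a known compound symmetric covariance, computes Cronbach's alpha for each replicate, and reports an empirical percentile interval. It is only a sketch under assumed values (10 observations, 5 items, true correlation 0.5); it does not reproduce the exact or F-approximate distributions derived in the paper.

        import numpy as np

        def cronbach_alpha(data):
            """Cronbach's alpha for an (n_subjects, k_items) data matrix."""
            k = data.shape[1]
            item_var = data.var(axis=0, ddof=1).sum()
            total_var = data.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        rng = np.random.default_rng(0)
        n, k, rho = 10, 5, 0.5                             # small sample, compound symmetry
        cov = np.full((k, k), rho) + np.eye(k) * (1 - rho)

        alphas = [cronbach_alpha(rng.multivariate_normal(np.zeros(k), cov, size=n))
                  for _ in range(5000)]
        # Empirical spread of sample alpha around its true value of about 0.83
        print(np.percentile(alphas, [2.5, 50, 97.5]))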

  1. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
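
    A simplified way to see how the ICC enters such a calculation is the design-effect adjustment sketched below: compute the usual two-arm sample size for a continuous outcome and inflate it by 1 + q × ICC, where q is the proportion of enrolled infants whose twin is also enrolled. This is a hedged approximation of the general idea, not the published Excel/Shiny calculator, and the numbers are illustrative.

        import math
        from scipy import stats

        def infants_per_arm(delta, sd, icc, prop_twins, alpha=0.05, power=0.80):
            """Infants per arm for a continuous outcome, inflated for twin clustering.
            prop_twins = proportion of enrolled infants whose twin is also enrolled."""
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            n_independent = 2 * (z * sd / delta) ** 2   # assuming independent infants
            design_effect = 1 + prop_twins * icc        # clusters of size at most 2
            return math.ceil(n_independent * design_effect)

        # Illustrative: detect a 0.3 SD difference when 20% of infants are co-enrolled twins
        print(infants_per_arm(delta=0.3, sd=1.0, icc=0.7, prop_twins=0.20))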

  2. Durability Assessment of Gamma TiAl

    NASA Technical Reports Server (NTRS)

    Draper, Susan L.; Lerch, Bradley A.; Pereira, J. Michael; Miyoshi, Kazuhisa; Arya, Vinod K.; Zhuang, Wyman

    2004-01-01

    Gamma TiAl was evaluated as a candidate alloy for low-pressure turbine blades in aeroengines. The durability of γ-TiAl was studied by examining the effects of impact or fretting on its fatigue strength. Cast-to-size Ti-48Al-2Cr-2Nb, the reference alloy, was impact tested with projectiles of different sizes at various impact energies and subsequently fatigue tested. Impacting degraded the residual fatigue life. However, under the ballistic impact conditions studied, it was concluded that the impacts expected in an aeroengine would not result in catastrophic damage, nor would the damage be severe enough to result in a fatigue failure under the anticipated design loads. In addition, other gamma alloys were investigated, including another cast-to-size alloy, several cast and machined specimens, and a forged alloy. Within this Ti-48-2-2 family of alloys the aluminum content was also varied. The cracking patterns resulting from impacting were documented and correlated with impact variables. The cracking type and severity were reasonably predicted using finite element models. Mean stress effects were also studied on impact-damaged fatigue samples. The fatigue strength was accurately predicted based on the flaw size using a threshold-based, fracture mechanics approach. To study the effects of wear due to potential applications in a blade-disk dovetail arrangement, the machined Ti-47-2-2 alloy was fretted against In-718 using pin-on-disk experiments. Wear mechanisms were documented and compared to those of Ti-6Al-4V. A few fatigue samples were also fretted and subsequently fatigue tested. It was found that, under the conditions studied, the fretting was not severe enough to affect the fatigue strength of γ-TiAl.

  3. A Bayesian Approach to the Overlap Analysis of Epidemiologically Linked Traits.

    PubMed

    Asimit, Jennifer L; Panoutsopoulou, Kalliope; Wheeler, Eleanor; Berndt, Sonja I; Cordell, Heather J; Morris, Andrew P; Zeggini, Eleftheria; Barroso, Inês

    2015-12-01

    Diseases co-occur in individuals more often than expected by chance, and may be explained by shared underlying genetic etiology. A common approach to genetic overlap analyses is to use summary genome-wide association study data to identify single-nucleotide polymorphisms (SNPs) that are associated with multiple traits at a selected P-value threshold. However, P-values do not account for differences in power, whereas Bayes factors (BFs) do, and may be approximated using summary statistics. We use simulation studies to compare the power of frequentist and Bayesian approaches to overlap analyses, and to decide on appropriate thresholds for comparison between the two methods. It is empirically illustrated that BFs have the advantage over P-values of a decreasing type I error rate as study size increases for single-disease associations. Consequently, the overlap analysis of traits from different-sized studies encounters issues in fair P-value threshold selection, whereas BFs are adjusted automatically. Extensive simulations show that Bayesian overlap analyses tend to have higher power than those that assess association strength with P-values, particularly in low-power scenarios. Calibration tables between BFs and P-values are provided for a range of sample sizes, as well as an approximation approach for sample sizes that are not in the calibration table. Although P-values are sometimes thought more intuitive, these tables assist in removing the opaqueness of Bayesian thresholds and may also be used in the selection of a BF threshold to meet a certain type I error rate. An application of our methods is used to identify variants associated with both obesity and osteoarthritis. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
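
    One common way to approximate a Bayes factor from summary association statistics, in the spirit of the comparison above, is Wakefield's approximate Bayes factor, which needs only the effect estimate, its standard error, and a prior variance for the true effect. The sketch below is an illustration under assumed values; it is not the authors' exact procedure or their calibration tables.

        import numpy as np

        def wakefield_bf(beta_hat, se, prior_sd=0.2):
            """Wakefield-style approximate Bayes factor for the alternative hypothesis,
            computed from a summary effect estimate and its standard error."""
            v = se ** 2            # sampling variance of the estimate
            w = prior_sd ** 2      # prior variance of the true effect
            z = beta_hat / se
            abf_null = np.sqrt((v + w) / v) * np.exp(-z ** 2 * w / (2 * (v + w)))
            return 1.0 / abf_null  # evidence for the alternative rather than the null

        # Illustrative SNPs with the same z-score from studies of different size:
        # the smaller standard error (larger study) changes the Bayes factor
        print(wakefield_bf(beta_hat=0.20, se=0.05))
        print(wakefield_bf(beta_hat=0.08, se=0.02))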

  4. Assessing the role of detrital zircon sorting on provenance interpretations in an ancient fluvial system using paleohydraulics - Permian Cutler Group, Paradox Basin, Utah and Colorado

    NASA Astrophysics Data System (ADS)

    Findlay, C. P., III; Ewing, R. C.; Perez, N. D.

    2017-12-01

    Detrital zircon age signatures used in provenance studies are assumed to be representative of entire catchments from which the sediment was derived, but the extent to which hydraulic sorting can bias provenance interpretations is poorly constrained. Sediment and mineral sorting occurs with changes in hydraulic conditions driven by both allogenic and autogenic processes. Zircon is sorted from less dense minerals due to the difference in density, and any age dependence on zircon size could potentially bias provenance interpretations. In this study, a coupled paleohydraulic and geochemical provenance approach is used to identify changes in paleohydraulic conditions and relate them to spatial variations in provenance signatures from samples collected along an approximately time-correlative source-to-sink pathway in the Permian Cutler Group of the Paradox Basin. Samples proximal to the uplift have a paleoflow direction to the southwest. In the medial basin, paleocurrent directions indicate that salt movement caused fluvial pathways to divert to the north and northwest on the flanks of anticlines. Channel depth, flow velocity, and discharge calculations derived from field measurements of grain size and dune and bar cross-stratification indicate that the competency of the fluvial system decreased from the proximal to the medial basin by up to a factor of 12. Based upon the paleohydraulic calculations, zircon size fractionation would occur along the transect such that the larger zircons are removed from the system prior to reaching the medial basin. Analysis of the size and age distribution of zircons from the proximal and distal fluvial system of the Cutler Group tests if this hydraulic sorting affects the expected Uncompahgre Uplift age distribution.

  5. ChemCam Passive Sky Spectroscopy at Gale Crater, Mars: Interannual Variability in Dust Aerosol Particle Size, Missing Water Vapor, and the Molecular Oxygen Problem

    NASA Astrophysics Data System (ADS)

    McConnochie, T. H.; Smith, M. D.; Wolff, M. J.; Bender, S. C.; Lemmon, M. T.; Wiens, R. C.; Maurice, S.; Gasnault, O.; Lasue, J.; Meslin, P. Y.; Harri, A. M.; Genzer, M.; Kemppinen, O.; Martinez, G.; DeFlores, L. P.; Blaney, D. L.; Johnson, J. R.; Bell, J. F., III; Trainer, M. G.; Lefèvre, F.; Atreya, S. K.; Mahaffy, P. R.; Wong, M. H.; Franz, H. B.; Guzewich, S.; Villanueva, G. L.; Khayat, A. S.

    2017-12-01

    The Mars Science Laboratory's (MSL) ChemCam spectrometer measures atmospheric aerosol properties and gas abundances by operating in passive mode and observing scattered sky light at two different elevation angles. We have previously [e. g. 1, 2] presented the methodology and results of these ChemCam Passive Sky observations. Here we will focus on three of the more surprising results that we have obtained: (1) depletion of the column water vapor at Gale Crater relative to that of the surrounding region combined with a strong enhancement of the local column water vapor relative to pre-dawn in-situ measurements, (2) an interannual change in the effective particle size of dust aerosol during the aphelion season, and (3) apparent seasonal and interannual variability in molecular oxygen that differs significantly from the expected behavior of a non-condensable trace gas and differs significantly from global climate model expectations. The ChemCam passive sky water vapor measurements are quite robust but their interpretation depends on the details of measurements as well as on the types of water vapor vertical distributions that can be produced by climate models. We have a high degree of confidence in the dust particle size changes but since aerosol results in general are subject to a variety of potential systematic effects our particle size results would benefit from confirmation by other techniques [c.f. 3]. For the ChemCam passive sky molecular oxygen results we are still working to constrain the uncertainties well enough to confirm the observed surprising behavior, motivated by similarly surprising atmospheric molecular oxygen variability observed by MSL's Sample Analysis at Mars (SAM) instrument [4]. REFERENCES: [1] McConnochie, et al. (2017), Icarus (submitted). [2] McConnochie, et al. (2017), abstract # 3201, The 6th International Workshop on the Mars Atmosphere: Granada, Spain. [3] Vicente-Retortillo et al. (2017), GRL, 44. [4] Trainer et al. (2017), 2017 AGU Fall Meeting.

  6. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation). A larger value of the ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most commonly encountered in practice are also provided for convenient use. Extensive simulation studies showed that the distribution of the product method and the bootstrapping method outperform Sobel's method, but the distribution of the product method is recommended in practice because of its lower computational load compared to the bootstrapping method. An R package has been developed implementing the distribution of the product method for sample size determination in longitudinal mediation study designs.
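
    To make the simulation logic concrete, the sketch below estimates power for Sobel's test in a simple single-level mediation model (X → M → Y). It is a deliberately simplified analogue of the article's multilevel longitudinal setting, with illustrative path coefficients and sample size.

        import numpy as np
        from scipy import stats

        def ols(X, y):
            """OLS coefficients and standard errors (X already contains an intercept column)."""
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            sigma2 = resid @ resid / (len(y) - X.shape[1])
            se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
            return beta, se

        def sobel_power(n, a=0.3, b=0.3, n_sim=2000, alpha=0.05, seed=1):
            """Simulated power of Sobel's test for the indirect effect a*b
            in a single-level mediation model with no direct effect of X on Y."""
            rng = np.random.default_rng(seed)
            z_crit = stats.norm.ppf(1 - alpha / 2)
            hits = 0
            for _ in range(n_sim):
                x = rng.normal(size=n)
                m = a * x + rng.normal(size=n)
                y = b * m + rng.normal(size=n)
                ones = np.ones(n)
                (_, a_hat), (_, sa) = ols(np.column_stack([ones, x]), m)
                (_, _, b_hat), (_, _, sb) = ols(np.column_stack([ones, x, m]), y)
                z = (a_hat * b_hat) / np.sqrt(b_hat**2 * sa**2 + a_hat**2 * sb**2)
                hits += abs(z) > z_crit
            return hits / n_sim

        print(sobel_power(n=100))   # power for small a and b paths at n = 100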

  7. Drought sensitivity predicts habitat size sensitivity in an aquatic ecosystem.

    PubMed

    Amundrud, Sarah L; Srivastava, Diane S

    2015-07-01

    Species and trophic richness often increase with habitat size. Although many ecological processes have been invoked to explain both patterns, the environmental stress associated with small habitats has rarely been considered. We propose that larger habitats may be species rich simply because their environmental conditions are within the fundamental niche of more species; larger habitats may also have more trophic levels if traits of predators render them vulnerable to environmental stress. We test this hypothesis using the aquatic insect larvae in water-filled bromeliads. In bromeliads, the probability of desiccation is greatest in small plants. For the 10 most common bromeliad insect taxa, we ask whether differences in drought tolerance and regional abundances between taxa predict community and trophic composition over a gradient of bromeliad size. First, we used bromeliad survey data to calculate the mean habitat size of occurrence of each taxon. Comparing the observed mean habitat size of occurrence to that expected from random species assembly based on differences in their regional abundances allowed us to obtain habitat size sensitivity indices (as Z scores) for the various insect taxa. Second, we obtained drought sensitivity indices by subjecting individual insects to drought and measuring the effects on relative growth rates in a mesocosm experiment. We found that drought sensitivity strongly predicts habitat size sensitivity in bromeliad insects. However, an increase in trophic richness with habitat size could not be explained by an increased sensitivity of predators to drought, but rather by sampling effects, as predators were rare compared to lower trophic levels. This finding suggests that physiological tolerance to environmental stress can be relevant in explaining the universal increase in species with habitat size.

  8. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
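
    The point about absolute versus relative sample size can be checked directly: for a simple random sample, the margin of error of a proportion depends essentially on n alone, with only a small finite-population correction from the size of the "pot". A quick sketch with illustrative numbers:

        import math

        def margin_of_error(n, population, p=0.5, z=1.96):
            """Approximate 95% margin of error for a proportion,
            including the finite-population correction."""
            fpc = math.sqrt((population - n) / (population - 1))
            return z * math.sqrt(p * (1 - p) / n) * fpc

        # The same n = 1000 gives nearly the same precision for very different populations
        for pop in (10_000, 1_000_000, 300_000_000):
            print(pop, round(margin_of_error(1000, pop), 4))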

  9. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of the literature published in 2005. The frequency of reporting sample size calculations and the sample sizes themselves were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
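
    A minimal sketch of the kind of calculation such studies could report is given below, assuming the goal is to estimate sensitivity to within a chosen absolute precision (a Buderer-style formula). The expected sensitivity, precision, and prevalence are illustrative values, not figures from the surveyed papers.

        import math
        from scipy import stats

        def n_for_sensitivity(sens, precision, prevalence, conf=0.95):
            """Total subjects needed to estimate sensitivity to +/- precision,
            given the expected prevalence of the target condition."""
            z = stats.norm.ppf(1 - (1 - conf) / 2)
            n_diseased = z ** 2 * sens * (1 - sens) / precision ** 2
            return math.ceil(n_diseased / prevalence)

        # Illustrative: expected sensitivity 0.85, +/-0.07 precision, 50% prevalence
        print(n_for_sensitivity(0.85, 0.07, 0.50))   # about 200 subjects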

  10. Temporal polyethism, life expectancy, and entropy of workers of the ant Ectatomma vizottoi Almeida, 1987 (Formicidae: Ectatomminae).

    PubMed

    Santana Vieira, Alexsandro; Desidério Fernandes, Wedson; Fernando Antonialli-Junior, William

    2010-05-01

    We investigated the changes in the behavioral repertoire over the course of life and determined the life expectancy and entropy of workers of the ant Ectatomma vizottoi. Newly emerged ants were individually marked with model airplane paint for observation of behaviors and determination of the age and life expectancy. Ants were divided into two groups: young and old workers. The 36 behaviors observed were divided into eight categories. Workers exhibit a clear division of tasks throughout their lives, with young workers performing more tasks inside the colony and old workers, outside, unlike species that have small colonies. This species also exhibits an intermediate life expectancy compared to workers of other species that are also intermediate in size. This supports the hypothesis of a relationship between size and maximum life expectancy, but it also suggests that other factors may be acting in concert. The entropy value shows a high mortality rate during the first life intervals.

  11. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
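
    The recommendation to plug an upper confidence limit (UCL) of the SD into the sample size formula can be sketched as follows, using the chi-square-based one-sided UCL for a normal standard deviation. The pilot SD, pilot size, and detectable difference below are illustrative values.

        import math
        from scipy import stats

        def sd_ucl(sample_sd, n_pilot, level=0.80):
            """One-sided upper confidence limit for a normal population SD."""
            chi2_low = stats.chi2.ppf(1 - level, n_pilot - 1)
            return sample_sd * math.sqrt((n_pilot - 1) / chi2_low)

        def n_per_arm(delta, sd, alpha=0.05, power=0.80):
            """Standard two-arm sample size for a continuous outcome."""
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            return math.ceil(2 * (z * sd / delta) ** 2)

        pilot_sd, n_pilot = 40.0, 20               # illustrative single pilot sample
        sd80 = sd_ucl(pilot_sd, n_pilot, 0.80)     # 80% UCL, as suggested for 1-2 samples
        print(n_per_arm(delta=22, sd=pilot_sd))    # using the raw pilot SD
        print(n_per_arm(delta=22, sd=sd80))        # using the 80% UCL of the SD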

  12. Starburst to Quiescent from HST/ALMA: Stars and Dust Unveil Minor Mergers in Submillimeter Galaxies at z ∼ 4.5

    NASA Astrophysics Data System (ADS)

    Gómez-Guijarro, C.; Toft, S.; Karim, A.; Magnelli, B.; Magdis, G. E.; Jiménez-Andrade, E. F.; Capak, P. L.; Fraternali, F.; Fujimoto, S.; Riechers, D. A.; Schinnerer, E.; Smolčić, V.; Aravena, M.; Bertoldi, F.; Cortzen, I.; Hasinger, G.; Hu, E. M.; Jones, G. C.; Koekemoer, A. M.; Lee, N.; McCracken, H. J.; Michałowski, M. J.; Navarrete, F.; Pović, M.; Puglisi, A.; Romano-Díaz, E.; Sheth, K.; Silverman, J. D.; Staguhn, J.; Steinhardt, C. L.; Stockmann, M.; Tanaka, M.; Valentino, F.; van Kampen, E.; Zirm, A.

    2018-04-01

    Dust-enshrouded, starbursting, submillimeter galaxies (SMGs) at z ≥ 3 have been proposed as progenitors of z ≥ 2 compact quiescent galaxies (cQGs). To test this connection, we present a detailed spatially resolved study of the stars, dust, and stellar mass in a sample of six submillimeter-bright starburst galaxies at z ∼ 4.5. The stellar UV emission probed by HST is extended and irregular and shows evidence of multiple components. Informed by HST, we deblend Spitzer/IRAC data at rest-frame optical, finding that the systems are undergoing minor mergers with a typical stellar mass ratio of 1:6.5. The FIR dust continuum emission traced by ALMA locates the bulk of star formation in extremely compact regions (median r_e = 0.70 ± 0.29 kpc), and it is in all cases associated with the most massive component of the mergers (median log(M*/M⊙) = 10.49 ± 0.32). We compare spatially resolved UV slope (β) maps with the FIR dust continuum to study the infrared excess (IRX = L_IR/L_UV)–β relation. The SMGs display systematically higher IRX values than expected from the nominal trend, demonstrating that the FIR and UV emissions are spatially disconnected. Finally, we show that the SMGs fall on the mass–size plane at smaller stellar masses and sizes than the cQGs at z = 2. Taking into account the expected evolution in stellar mass and size between z = 4.5 and z = 2 due to the ongoing starburst and mergers with minor companions, this is in agreement with a direct evolutionary connection between the two populations.

  13. Body shape and size depictions of African American women in JET magazine, 1953-2006.

    PubMed

    Dawson-Andoh, Nana A; Gray, James J; Soto, José A; Parker, Scott

    2011-01-01

    Depictions of Caucasian women in the mainstream media have become increasingly thinner in size and straighter in shape. These changes may be inconsistent with the growing influence of African American beauty ideals, which research has established as more accepting of larger body sizes and more curvaceous body types than Caucasians. The present study looked at trends in the portrayal of African American women featured in JET magazine from 1953 to 2006. Beauty of the Week (BOW) images were collected and analyzed to examine body size (estimated by independent judges) and body shape (estimated by waist-to-hip ratio). We expected body sizes to increase and body shapes to become more curvaceous. Results revealed a rise in models' body size consistent with expectations, but an increase in waist-to-hip ratio, contrary to prediction. Our findings suggest that the African American feminine beauty ideal reflects both consistencies with and departures from mainstream cultural ideals. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Large Portions Encourage the Selection of Palatable Rather Than Filling Foods.

    PubMed

    Brunstrom, Jeffrey M; Jarvstad, Andreas; Griggs, Rebecca L; Potter, Christina; Evans, Natalie R; Martin, Ashley A; Brooks, Jon Cw; Rogers, Peter J

    2016-10-01

    Portion size is an important driver of larger meals. However, effects on food choice remain unclear. Our aim was to identify how portion size influences the effect of palatability and expected satiety on choice. In Study 1, adult participants (n = 24, 87.5% women) evaluated the palatability and expected satiety of 5 lunchtime meals and ranked them in order of preference. Separate ranks were elicited for equicaloric portions from 100 to 800 kcal (100-kcal steps). In Study 2, adult participants (n = 24, 75% women) evaluated 9 meals and ranked 100-600 kcal portions in 3 contexts (scenarios), believing that 1) the next meal would be at 1900, 2) they would receive only a bite of one food, and 3) a favorite dish would be offered immediately afterwards. Regression analysis was used to quantify predictors of choice. In Study 1, the extent to which expected satiety and palatability predicted choice was highly dependent on portion size (P < 0.001). With smaller portions, expected satiety was a positive predictor, playing a role equal to palatability (100-kcal portions: expected satiety, β: 0.42; palatability, β: 0.46). With larger portions, palatability was a strong predictor (600-kcal portions: β: 0.53), and expected satiety was a poor or negative predictor (600-kcal portions: β: -0.42). In Study 2, this pattern was moderated by context (P = 0.024). Results from scenario 1 replicated Study 1. However, expected satiety was a poor predictor in both scenario 2 (expected satiety was irrelevant) and scenario 3 (satiety was guaranteed), and palatability was the primary driver of choice across all portions. In adults, expected satiety influences food choice, but only when small equicaloric portions are compared. Larger portions not only promote the consumption of larger meals, but they encourage the adoption of food choice strategies motivated solely by palatability. © 2016 American Society for Nutrition.

  15. Paleointensity results for 0 and 4 ka from Hawaiian lava flows: a new approach to sampling

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Tauxe, L.; Staudigel, H.; Ron, H.; Trusdell, F.

    2012-04-01

    Paleointensity data are typically generated from core samples drilled out of the massive parts of lava flows. During Thellier-Thellier type experiments, these massive samples suffer from very low success rates (~20%), as shown by failure to meet statistical criteria. Low success generally occurs for two reasons: 1) alteration of the sample during the heating process, and 2) multi-domain behavior of massive material. Moreover, recent studies of historical lava flows show that massive samples may not accurately reflect the intensity of the magnetic field even when they are successful (Valet et al., 2010). Alternatively, submarine basaltic glasses (SBG) produce high success rates (~80%) for Thellier-Thellier type experiments, likely due to near instantaneous cooling rates which produce single-domain magnetic grains. In addition, SBG have been proven to produce accurate records of the magnetic field (e.g., Pick and Tauxe, 1993). In this study we investigate the success of paleointensity experiments on subaerial quenched basalts from Hawaii in the quest for single domain, rapidly cooled subaerial analogs to SBG. We also examine the effects of grain size and cooling rate on the accuracy of paleointensity results. During March 2011, we collected samples from 31 dated lava flows (0-3800 BP), including the historical 1950 C.E. and 2010 C.E. flows. Each lava flow was additionally subsampled when unique cooling structures within the unit could be identified. Single-domain, rapidly quenched glasses from the 1950 and 2010 flows are ideally behaved, i.e. straight Arai plots, and accurately record the expected geomagnetic field strength. However, slower cooled specimens from the same flows produce sagged Arai plots and consistently underestimate expected geomagnetic field intensity. Results from ideally behaved glasses over the last 4 ka indicate periods of rapid field change in Hawaii and a possible high intensity field spike around 2.7 ka. We will present new results from our comprehensive data set of Hawaii paleointensity on about the last 4 ka.

  16. Parking Lot Runoff Quality and Treatment Efficiency of a Stormwater-Filtration Device, Madison, Wisconsin, 2005-07

    USGS Publications Warehouse

    Horwatich, Judy A.; Bannerman, Roger T.

    2010-01-01

    To evaluate the treatment efficiency of a stormwater-filtration device (SFD) for potential use at Wisconsin Department of Transportation (WisDOT) park-and-ride facilities, an SFD was installed at an employee parking lot in downtown Madison, Wisconsin. This type of parking lot was chosen for the test site because the constituent concentrations and particle-size distributions (PSDs) were expected to be similar to those of a typical park-and-ride lot operated by WisDOT. The objective of this particular installation was to reduce loads of total suspended solids (TSS) in stormwater runoff to Lake Monona. This study also was designed to provide a range of treatment efficiencies expected for an SFD. Samples from the inlet and outlet were analyzed for 33 organic and inorganic constituents, including 18 polycyclic aromatic hydrocarbons (PAHs). Samples were also analyzed for physical properties, including PSD. Water-quality samples were collected for 51 runoff events from November 2005 to August 2007. Samples from all runoff events were analyzed for concentrations of suspended sediment (SS). Samples from 31 runoff events were analyzed for 15 constituents, samples from 15 runoff events were analyzed for PAHs, and samples from 36 events were analyzed for PSD. The treatment efficiency of the SFD was calculated using the summation of loads (SOL) and the efficiency ratio methods. Constituents for which the concentrations and (or) loads were decreased by the SFD include TSS, SS, volatile suspended solids, total phosphorus (TP), total copper, total zinc, and PAHs. The efficiency ratios for these constituents are 45, 37, 38, 55, 22, 5, and 46 percent, respectively. The SOLs for these constituents are 32, 37, 28, 36, 23, 8, and 48 percent, respectively. The SOL for chloride was -21 percent and the efficiency ratio was -18 percent. Six chemical constituents or properties (dissolved phosphorus, chemical oxygen demand, dissolved zinc, total dissolved solids, dissolved chemical oxygen demand, and dissolved copper) were not included in the efficiency-ratio or SOL calculations because the differences between concentrations in samples from the inlet and outlet were not significant. Concentrations of TP and TSS were inexplicably high in samples at the inlet for one event.
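
    A minimal sketch (illustrative numbers, not the study's data) of the two efficiency metrics used above: the summation-of-loads (SOL) method compares total constituent mass entering and leaving the device, while the efficiency ratio compares mean event concentrations.

      import numpy as np

      # hypothetical per-event data for one constituent (e.g., TSS)
      inlet_conc = np.array([120.0, 95.0, 210.0, 60.0])    # mg/L
      outlet_conc = np.array([70.0, 55.0, 140.0, 45.0])    # mg/L
      runoff_volume = np.array([15.0, 8.0, 30.0, 5.0])     # m^3 per event

      def summation_of_loads(c_in, c_out, volume):
          """Percent reduction in total load (concentration x volume summed over events)."""
          load_in = np.sum(c_in * volume)
          load_out = np.sum(c_out * volume)
          return 100.0 * (load_in - load_out) / load_in

      def efficiency_ratio(c_in, c_out):
          """Percent reduction in the mean event concentration."""
          return 100.0 * (c_in.mean() - c_out.mean()) / c_in.mean()

      print(f"SOL reduction: {summation_of_loads(inlet_conc, outlet_conc, runoff_volume):.0f}%")
      print(f"Efficiency ratio: {efficiency_ratio(inlet_conc, outlet_conc):.0f}%")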

  17. MEASURING TRANSIT SIGNAL RECOVERY IN THE KEPLER PIPELINE. I. INDIVIDUAL EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christiansen, Jessie L.; Clarke, Bruce D.; Burke, Christopher J.

    The Kepler mission was designed to measure the frequency of Earth-size planets in the habitable zone of Sun-like stars. A crucial component for recovering the underlying planet population from a sample of detected planets is understanding the completeness of that sample: the fraction of the planets that could have been discovered in a given data set that actually were detected. Here, we outline the information required to determine the sample completeness, and describe an experiment to address a specific aspect of that question, i.e., the issue of transit signal recovery. We investigate the extent to which the Kepler pipeline preserves individual transit signals by injecting simulated transits into the pixel-level data, processing the modified pixels through the pipeline, and comparing the measured transit signal-to-noise ratio (S/N) to that expected without perturbation by the pipeline. We inject simulated transit signals across the full focal plane for a set of observations for a duration of 89 days. On average, we find that the S/N of the injected signal is recovered at MS = 0.9973 (± 0.0012) × BS - 0.0151 (± 0.0049), where MS is the measured S/N and BS is the baseline, or expected, S/N. The 1σ width of the distribution around this correlation is ±2.64%. This indicates an extremely high fidelity in reproducing the expected detection statistics for single transit events, and provides teams performing their own periodic transit searches the confidence that there is no systematic reduction in transit signal strength introduced by the pipeline. We discuss the pipeline processes that cause the measured S/N to deviate significantly from the baseline S/N for a small fraction of targets; these are primarily the handling of data adjacent to spacecraft re-pointings and the removal of harmonics prior to the measurement of the S/N. Finally, we outline the further work required to characterize the completeness of the Kepler pipeline.
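
    A small sketch applying the fitted relation reported above to gauge how closely the pipeline reproduces an injected signal; the slope and offset come from the abstract, while the baseline S/N values are arbitrary examples.

      def measured_snr(baseline_snr, slope=0.9973, offset=-0.0151):
          """Pipeline-measured S/N predicted for a given injected (baseline) S/N."""
          return slope * baseline_snr + offset

      for bs in (7.1, 10.0, 20.0):          # hypothetical baseline S/N values
          ms = measured_snr(bs)
          print(f"baseline S/N {bs:5.1f} -> measured S/N {ms:5.2f} "
                f"({100 * (ms - bs) / bs:+.2f}%)")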

  18. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance was < 0.04 ticks per 10 m2 were more likely not to depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into 10 abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
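
    A minimal sketch (not the authors' code) of fixed-precision sample-size formulas for the variance-mean models mentioned above. D is the target precision (standard error divided by the mean); the common k = 0.3742 is taken from the abstract, and the remaining parameter values are placeholders.

      import numpy as np

      def n_taylor(mean, a, b, D):
          """Taylor power law: variance = a * mean**b  ->  n = a * mean**(b - 2) / D**2."""
          return a * mean ** (b - 2.0) / D ** 2

      def n_iwao(mean, alpha, beta, D):
          """Iwao patchiness regression: variance = (alpha + 1)*mean + (beta - 1)*mean**2."""
          return ((alpha + 1.0) / mean + (beta - 1.0)) / D ** 2

      def n_negbin(mean, k, D):
          """Negative binomial with common k: variance = mean + mean**2 / k."""
          return (1.0 / mean + 1.0 / k) / D ** 2

      D = 0.25                         # fixed precision level, SE / mean (assumed)
      k_common = 0.3742                # common k reported in the abstract
      for m in (0.02, 0.05, 0.10):     # mean ticks per 10-m2 quadrat
          print(f"mean {m:.2f} ticks/quadrat: "
                f"n_negbin = {np.ceil(n_negbin(m, k_common, D)):.0f}, "
                f"n_taylor(a=2, b=1.3) = {np.ceil(n_taylor(m, 2.0, 1.3, D)):.0f}, "
                f"n_iwao(alpha=1, beta=2) = {np.ceil(n_iwao(m, 1.0, 2.0, D)):.0f}")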

  19. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
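
    A minimal sketch (hypothetical cost structure, not from the paper) of the two cost-efficiency rules: choose n to minimize the average cost per subject, or to minimize total cost divided by the square root of n. The quadratic term stands in for per-subject costs that rise as recruitment gets harder; with a purely linear cost the first rule has no interior minimum.

      import numpy as np

      fixed_cost = 100_000.0          # hypothetical start-up cost
      linear_cost = 400.0             # hypothetical baseline cost per subject
      quadratic_cost = 0.5            # hypothetical escalation as recruitment gets harder

      def total_cost(n):
          return fixed_cost + linear_cost * n + quadratic_cost * n ** 2

      n_grid = np.arange(1, 5001)

      # Rule 1: sample size minimizing the average cost per subject, total_cost(n) / n.
      n_rule1 = n_grid[np.argmin(total_cost(n_grid) / n_grid)]

      # Rule 2: sample size minimizing total_cost(n) / sqrt(n) (suggested for innovative studies).
      n_rule2 = n_grid[np.argmin(total_cost(n_grid) / np.sqrt(n_grid))]

      print(f"rule 1 (min cost per subject): n = {n_rule1}")
      print(f"rule 2 (min cost / sqrt(n)):   n = {n_rule2}")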

  20. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the wide distributions of read counts and dispersions across genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distributions; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
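
    A generic illustration (a simple Wald-type negative-binomial power approximation in Python, not the RnaSeqSampleSize package or its API) of the kind of calculation involved: per-gene power for a two-group comparison is computed with gene means and dispersions drawn from a hypothetical reference distribution, and a crude Bonferroni-style adjusted alpha stands in for FDR control.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def gene_power(n, mean_count, dispersion, log_fc, alpha):
          """Wald-type power for detecting log_fc with n samples per group.

          Uses the delta-method variance of a log mean under a negative binomial:
          Var(log mean) ~ (1/mean + dispersion) / n, summed over the two groups.
          """
          se = np.sqrt(2.0 * (1.0 / mean_count + dispersion) / n)
          z_crit = stats.norm.ppf(1.0 - alpha / 2.0)
          return stats.norm.cdf(abs(log_fc) / se - z_crit)

      # hypothetical reference distribution of gene mean counts and dispersions
      n_genes = 10_000
      mean_counts = rng.lognormal(mean=3.0, sigma=1.5, size=n_genes)
      dispersions = rng.gamma(shape=2.0, scale=0.1, size=n_genes)

      target_log_fc = np.log(2.0)            # aim to detect a 2-fold change
      alpha_adj = 0.05 / n_genes             # crude multiple-testing adjustment

      for n_per_group in (3, 6, 10, 20):
          power = gene_power(n_per_group, mean_counts, dispersions,
                             target_log_fc, alpha_adj).mean()
          print(f"n = {n_per_group:>2} per group: average power ~ {power:.2f}")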
